A high school teacher who made the school website. A broker who makes complex linked Excel sheets that update with visual basic macros. A graphic designer who programs simple interactive animations using HTML5. An assistant at a marketing firm who writes sophisticated database queries that find exactly the demographic slice the marketers are looking for. A lawyer who writes regular expressions that search and alter terms in large contracts. A scientist who programs an Arduino to run a piece of lab equipment.
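To make the lawyer's example concrete: here is a minimal sketch of what "search and alter terms in large contracts" might look like with a regular expression. The contract text, the defined term, and the replacement are all invented for illustration.

```python
import re

# Hypothetical contract snippet; the terms are made up for this example.
contract = "The Lessee shall notify the Lessor within 30 days. The Lessee agrees to pay rent monthly."

# Replace a defined term everywhere it appears, matching whole words only
# so that words merely containing "Lessee" are left alone.
updated = re.sub(r"\bLessee\b", "Tenant", contract)
print(updated)
```

A few lines like this can rename a defined term consistently across hundreds of pages, which is exactly the kind of task a "real" programmer would rarely be around to do.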
Are these people programmers?
From the point of view of a “real” programmer, the answer is clearly no. At best they might generously allow that such a person is a “scripter” or some other diminutive. If a trained programmer with a computer science degree saw their code, the reaction would probably be something like this xkcd cartoon. They might say it’s fine as amateur code, but it isn’t programming.
And yet calling these part-time coders “amateurs” is literally wrong by definition: these people are getting paid for their code. And what they make is often as essential to their organizations as the work that “real” programmers are doing. What they lack in depth of computer science knowledge they make up for in knowing exactly what’s needed and in being on hand to do it for whatever they were already being paid.
What do these para-coders have in common? They were mostly hired to do something else, but used their coding skills to make themselves more valuable to the organizations they work for. They are adding to their résumés and making themselves more desirable as employees. They are also doing something extremely valuable that “real” programmers wouldn’t take the time to do.
Right now such employees are still exceptions, but this kind of situation is becoming more and more common. The things people are required to do on computers are becoming more complex. More websites, cloud data sources, and even commercial products expose APIs for interacting with and controlling them. People who can manipulate these things will be in ever greater demand, and companies are going to want people educated in these skills.
That’s why I predict that within 20 or 30 years, programming will move from being an elective to being a core subject that everyone will have to learn at least a little. I’m not saying everyone will need to code in their daily lives; most still won’t. Most people today can go through their lives without ever needing to factor a polynomial, but we still expect every high school student to learn it, at least long enough to pass the state test.
And this example from math is illustrative in another way. Factoring polynomials is essential if you’re going on to learn calculus, which most people don’t. Otherwise, it’s not something that comes up much. Many people argue that we’d be better off teaching students less algebra and more statistics, for example. I won’t weigh in on that, but since the curriculum for coding isn’t set in stone the way it is for math, now is a good time to think of what students should be learning in a coding class.
Certainly the constructs you see in most programming languages: if and else statements, for and while loops, variables, and methods or functions. If everyone had a basic understanding of these concepts, that would be useful enough. But say we wanted to go a little further: what else would we teach them?
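To show how little ground those basics actually cover, here is a toy classroom-sized example in Python (the names and grading thresholds are invented) that uses every construct just listed: a variable, a function, a for loop, and an if/else statement.

```python
def letter_grade(score):
    """Return a letter grade for a numeric test score (thresholds are made up)."""
    if score >= 90:
        return "A"
    elif score >= 80:
        return "B"
    else:
        return "C or below"

scores = [95, 82, 67]  # a variable holding a list of test scores

for score in scores:  # a for loop over the list
    print(score, letter_grade(score))
```

A student who can read and modify a dozen lines like these already has most of what the part-time coders above are using day to day.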
Some people would want everyone to have a basic understanding of HTML. Others might want students to learn more about how computer memory works. Others might want students to learn to work with different kinds of APIs.
One thing I’m confident of is that we have to start preparing students for this future. That means figuring out what a programming class should be for students who aren’t natural coders. What that looks like will be a central topic for this blog, and I’ll be sharing some of my own ideas soon.