
Computers can write their own code. So are programmers now obsolete? | John Naughton

I studied engineering at university and, like most of my contemporaries, found that I sometimes needed to write computer programs to do certain kinds of calculations. These pieces of utilitarian software were written in languages now regarded as the programming equivalent of Latin – Fortran, Algol and Pascal – and what I learned from the experience was that I was not a born hacker. The software I wrote was clumsy and inefficient and more talented programmers would look at it and roll their eyes, much as Rory McIlroy might do if required to play a round with an 18-handicap golfer. But it did the job and in that sense was, in the laconic phrase sometimes used by the great computer scientist Roger Needham, “good enough for government work”. And what I took away from the experience was a lifelong respect for programmers who can write elegant, efficient code. Anyone who thinks programming is easy has never done it.

All of which goes to explain why I sat up when, last year, someone realised that Codex – an offspring of GPT-3, the large neural network trained on vast troves of text gathered from the web that can generate plausible English text – could write apps, ie short computer programs including buttons, text input fields and colours, by remixing snippets of code it had been fed. So you could ask the program to write code to do a simple task – “make a snowstorm on a black background”, for example – and it would write and run the necessary code in JavaScript. In no time at all, there were tech startups such as SourceAI aiming to harness this new programming tool.
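The kind of JavaScript such a prompt produces is not mysterious. Here is a minimal sketch of the particle logic behind a "snowstorm" – every name and number is illustrative, not Codex's actual output, and in a browser each frame would be drawn on a black canvas; only the flake-updating logic is shown so it runs anywhere:

```javascript
// Illustrative sketch of "a snowstorm on a black background".
// A browser version would draw each flake on a black <canvas> every frame;
// here only the particle bookkeeping is shown.

const WIDTH = 800, HEIGHT = 600, FLAKE_COUNT = 100;

// Create flakes at random positions, each with its own fall speed.
function makeFlakes(count) {
  return Array.from({ length: count }, () => ({
    x: Math.random() * WIDTH,
    y: Math.random() * HEIGHT,
    speed: 1 + Math.random() * 3, // pixels per animation frame
  }));
}

// Advance every flake one frame; flakes that fall off the bottom
// wrap around and re-enter at the top.
function step(flakes) {
  return flakes.map(({ x, y, speed }) => ({
    x,
    speed,
    y: (y + speed) % HEIGHT,
  }));
}

let flakes = makeFlakes(FLAKE_COUNT);
flakes = step(flakes);
console.log(`${flakes.length} flakes falling`);
```

The point is how little is going on: a handful of lines that a system trained on mountains of web code has seen, in one variation or another, thousands of times.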

This was impressive, quirky and perhaps useful in some contexts, but really it was just picking low-hanging fruit. Apps are small programs and the kinds of tasks Codex can do are ones that can be described succinctly in ordinary language. All the software has to do is search through the huge repository of computer code in its database and find a match that will do the job. No real inference or reasoning is required.

At this point, DeepMind, the London-based AI company, became interested in the problem. DeepMind is famous for developing AlphaGo, the machine that beat the human world champion at Go, and AlphaFold, the machine-learning system that seems better at predicting protein structures than any human. Recently, it announced that it had developed AlphaCode, a new programming engine potentially capable of outperforming many human developers.

In classic DeepMind style, the company decided to see how its system would perform on 10 challenges on Codeforces, a platform that hosts worldwide competitive programming contests. Although these challenges are not typical of a programmer's average day-to-day workload, the ability to solve them in a creative manner is a good indicator of programming ability. AlphaCode is the first AI system capable of competing with humans in this context.

Here’s what’s involved: competitors are given five to 10 problems expressed in natural language and allowed three hours to write programs to creatively solve as many problems as possible. This is a much more demanding task than simply specifying an app. For each problem, participants have to read and understand: a natural language description (spanning numerous paragraphs) that contains a narrative background to the problem; a description of the desired solution that competitors need to understand and parse carefully; a specification of the required input and output format; and one or more example input/output pairs. Then they have to write an efficient program that solves the problem. And finally, they have to run the program.
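To make that concrete, here is a hypothetical toy problem of the very easiest Codeforces kind – the problem statement and every name below are invented for illustration: given a first line containing n and a second line containing n integers, print the largest. A contestant's program must parse exactly that input format and emit exactly the required output:

```javascript
// Hypothetical warm-up problem: read n, then n integers, print the maximum.
// Contest programs read the whole input, parse it and print one answer;
// in Node.js a contestant would typically feed solve() the contents of stdin,
// e.g. require("fs").readFileSync(0, "utf8").

function solve(input) {
  const lines = input.trim().split("\n");
  const n = parseInt(lines[0], 10);                       // how many integers follow
  const nums = lines[1].trim().split(/\s+/).slice(0, n).map(Number);
  return String(Math.max(...nums));                       // the required output
}

const sampleInput = "5\n3 1 4 1 5\n";
console.log(solve(sampleInput)); // prints "5"
```

Real contest problems are, of course, vastly harder than this – the point of the sketch is only the shape of the task: narrative in, exact formatted answer out, with no partial credit for a program that almost parses the input.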

The key step – going from problem statement to coming up with a solution – is what makes the competition such a stiff test for a machine, because it requires understanding and reasoning about the problem, plus a deep comprehension of a wide range of algorithms and data structures. The impressive thing about the design of the Codeforces competitions is that it’s not possible to solve problems through shortcuts, such as duplicating solutions seen before or trying out every potentially related algorithm. To do well, you have to be creative.

So how did AlphaCode do? Quite well, is the answer. “Overall”, DeepMind reports, it came out “at the level of the median competitor. Although far from winning competitions, this result represents a substantial leap in AI problem-solving capabilities and we hope that our results will inspire the competitive programming community.”

Translation: “We’ll be back.”

They will. This is beginning to look like the story of Go-playing and protein folding: in both cases, the DeepMind machine starts at the median level and then rapidly outpaces the human competition. It will be a quick learner. Does that mean that programmers will become obsolete? No, because software engineering is about building systems, not just about solving discrete puzzles. But if I had to write software now, it would be reassuring to have such a machine as an assistant.

What I’ve been reading

Eat your words
Cooking with Virginia Woolf is a lovely essay by Valerie Stivers in the Paris Review on how the author of To the Lighthouse didn’t know much about boeuf en daube.

Keeping on rollin’
John Seabrook reflects on Ford’s decision to electrify its much-loved F-150 truck in a long New Yorker piece, America’s Favorite Pickup Truck Goes Electric.

Spotify’s true colors
A neat blogpost by Damon Krukowski, The Big Short of Streaming, dissects Spotify’s attempt to defuse the Joe Rogan controversy. TL;DR summary: Spotify is a tech company, not a music one.
