As soon as Tom Smith got his hands on Codex, a new artificial intelligence technology that writes its own computer programs, he gave it a job interview.
He asked whether it could handle the “coding challenges” that programmers often face when interviewing for big-money jobs at Silicon Valley companies like Google and Facebook. Could it write a program that replaces all the spaces in a sentence with dashes? Even better, could it write one that identifies invalid zip codes?
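The article does not show Codex’s actual output, but for illustration, hand-written Python solutions to those two warm-up tasks might look like this (the ZIP rule here is the common five-digit or ZIP+4 convention, an assumption on my part):

```python
import re

def spaces_to_dashes(text):
    # Replace every space in the sentence with a dash.
    return text.replace(" ", "-")

def is_valid_us_zip(code):
    # Accept five digits, optionally followed by a hyphen and four
    # more digits (ZIP+4). Real validation rules can be stricter.
    return re.fullmatch(r"\d{5}(-\d{4})?", code) is not None
```

Tasks of roughly this size are what Codex reportedly answered in seconds.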
It did both instantly, before going on to complete several other tasks. “These are problems that would be tough for many humans to solve, myself included, and it would type out the response in two seconds,” said Mr. Smith, a seasoned programmer who oversees an A.I. start-up called Gado Images. “It was spooky to watch.”
Codex seemed like a technology that would soon replace human workers. As Mr. Smith continued testing the system, he realized that its skills extended well beyond answering canned interview questions. It could even translate from one programming language to another.
Yet after a few weeks of working with the new technology, Mr. Smith believes it poses no threat to professional coders. In fact, like many other experts, he sees it as a tool that will increase human productivity. It may even help a whole new generation learn computing skills, showing them how to write simple pieces of code, almost like a personal tutor.
“This is a tool that can make a coder’s life a lot easier,” Mr. Smith said.
About four years ago, researchers at labs like OpenAI began building neural networks that analyzed enormous amounts of prose, including thousands of digital books, Wikipedia articles and all sorts of other text posted on the internet.
By pinpointing patterns in all that text, the networks learned to predict the next word in a sequence. When someone typed a few words into these “universal language models,” they could complete the thought with entire paragraphs. In this way, one system, an OpenAI creation called GPT-3, could write its own Twitter posts, speeches, poems and news articles.
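The models in question are vastly larger and more sophisticated, but the core idea of learning next-word prediction from patterns in text can be sketched with a toy bigram model. Everything below is illustrative and is not OpenAI’s method:

```python
from collections import Counter, defaultdict

def train_bigrams(corpus):
    # For each word, count which words follow it and how often.
    follows = defaultdict(Counter)
    words = corpus.split()
    for prev, nxt in zip(words, words[1:]):
        follows[prev][nxt] += 1
    return follows

def predict_next(follows, word):
    # Return the most frequent follower of `word`, or None if unseen.
    if word not in follows:
        return None
    return follows[word].most_common(1)[0][0]
```

A real language model replaces these raw counts with a neural network trained over billions of words, but the prediction task it is trained on is the same in spirit.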
Much to the surprise of even the researchers who built it, the system could also write its own computer programs, though they were short and simple. Apparently, it had learned from the countless programs posted to the internet. So OpenAI went a step further, training a new system, Codex, on an enormous array of both prose and code.
The result is a system that understands both prose and code, to a point. You can ask, in plain English, for snow falling on a black background, and it will give you code that creates a virtual snowstorm. If you ask for a blue bouncing ball, it will give you that, too.
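To make the snowfall request concrete, here is a minimal, hand-written text-mode sketch of the kind of program described, not Codex’s actual output. It draws one frame of white flakes on a dark (blank) background; the dimensions and flake count are arbitrary choices:

```python
import random

def snow_frame(width=20, height=8, flakes=15, seed=0):
    # Scatter '*' flakes over a blank grid and return it as a string.
    rng = random.Random(seed)
    grid = [[" "] * width for _ in range(height)]
    for _ in range(flakes):
        row = rng.randrange(height)
        col = rng.randrange(width)
        grid[row][col] = "*"
    return "\n".join("".join(row) for row in grid)
```

An animated version would shift each flake down one row per frame and redraw; a graphical version would do the same with pixels instead of characters.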
“You can tell it to do something, and it will do it,” said Ania Kubow, another programmer who has used the technology.
Codex can generate programs in 12 computer languages and even translate between them. But it often makes mistakes, and though its skills are impressive, it cannot reason like a human. It can recognize or mimic what it has seen in the past, but it is not nimble enough to think on its own.
Sometimes, the programs Codex generates do not run. Or they contain security flaws. Or they come nowhere close to what you wanted. OpenAI estimates that Codex produces correct code 37 percent of the time.
When Mr. Smith used the system as part of a “beta” test program this summer, the code it produced was impressive. But sometimes it worked only after he made a small change, like tweaking a command to suit his particular software setup or adding the digital code needed to access the internet service it was trying to query.
In other words, Codex was truly useful only to an experienced programmer.
But it can help programmers do their everyday work much faster. It can help them find the basic building blocks they need or point them toward new ideas. Using the technology, GitHub, a popular online service for programmers, now offers Copilot, a tool that suggests your next line of code, much the way “autocomplete” tools suggest the next word when you type texts or emails.
“It’s a way of getting code written without having to write as much code,” said Jeremy Howard, who founded the artificial intelligence lab Fast.ai and helped create the language technology that OpenAI’s work is based on. “It’s not always correct, but it’s just close enough.”
Mr. Howard and others believe Codex could also help novices learn to code. It is particularly good at generating simple programs from brief English descriptions. And it works in the other direction, too, by explaining complex code in plain English. Some, including Joel Hellermark, an entrepreneur in Sweden, are already trying to turn the system into a teaching tool.
The rest of the A.I. landscape looks similar. Robots are increasingly powerful. So are chatbots designed for online conversation. DeepMind, an A.I. lab in London, recently built a system that instantly identifies the shape of proteins in the human body, a key part of designing new medicines and vaccines. That task once took scientists days or even years. But those systems replace only a small part of what human experts can do.
In the few areas where new machines can instantly replace workers, they are typically in jobs the market is slow to fill. Robots, for instance, are increasingly useful inside shipping centers, which are expanding and struggling to find the workers needed to keep pace.
With his start-up, Gado Images, Mr. Smith set out to build a system that could automatically sort through the photo archives of newspapers and libraries, resurrecting forgotten images, automatically writing captions and tags, and sharing the photos with other publications and businesses. But the technology could handle only part of the job.
It could sift through a vast photo archive far faster than humans, identifying the images that might be useful and taking a stab at captions. But finding the best and most important photos and properly tagging them still required a seasoned archivist.
“We thought these tools were going to completely remove the need for humans, but what we learned after many years was that this wasn’t really possible. You still needed a skilled human to review the output,” Mr. Smith said. “The technology gets things wrong. And it can be biased. You still need a person to review what it has done and decide what is good and what is not.”
Codex extends what a machine can do, but it is another indication that the technology works best with humans at the controls.
“A.I. is not playing out like anyone expected,” said Greg Brockman, chief technology officer of OpenAI. “It felt like it was going to do this job and that job, and everyone was trying to figure out which one would go first. Instead, it is replacing no jobs. But it is taking the drudge work away from all of them at once.”