Last week the tech community went crazy over a new AI tool called GPT-3. Twitter was abuzz with developers showcasing what they’d been able to create with it, and with the wows that followed. GPT-3 is still in its adolescence and has quite a way to go before you and I will be using it. That said, for non-programmers it’s an interesting innovation to have on the radar, a chance to begin to understand its potential, both positive and negative.
What is GPT-3?
It’s an AI predictive language model developed by OpenAI, a San Francisco research lab co-founded by Elon Musk. In layman’s terms, GPT-3 has basically eaten a huge swath of the text available on the Internet. It can then generate language in response to a request, based on what is statistically plausible given what it has eaten. Last week OpenAI made GPT-3 available to a limited group of users through a private beta API.
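For developers with beta access, the interaction is simple: send text in, get text back. Here’s a minimal sketch of what a call might look like, assuming the beta-era openai Python package; the engine name, prompt, and parameters are illustrative, not from a real session:

```python
# A minimal sketch of a GPT-3 API call, assuming beta access and the
# beta-era `openai` Python package. The engine name, prompt, and
# parameters below are illustrative, not from a real session.
import openai

openai.api_key = "YOUR_API_KEY"  # key issued with beta access

response = openai.Completion.create(
    engine="davinci",   # the largest GPT-3 model offered in the beta
    prompt="Explain sales tax to a ten-year-old:",
    max_tokens=60,      # cap on how much text comes back
    temperature=0.7,    # higher values make the output more varied
)

# The model returns a statistically plausible continuation of the prompt.
print(response.choices[0].text.strip())
```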
Is GPT-3 open source?
Not exactly, despite the name. OpenAI has not released GPT-3’s code or trained model; for now, access runs through a gated API. Openness still matters here, though. In general, open-source software benefits the community by keeping costs down, encouraging maintenance, and providing transparency, and because AI development raises ethical and safety issues, transparency is especially important. OpenAI defines its role like this: “Our mission is to ensure that artificial general intelligence benefits all of humanity.”
What can GPT-3 do?
The range of tasks for GPT-3 is pretty mind-boggling, from coding to writing poetry. Because it responds to natural language, non-programmers can speak or write the result they want, and GPT-3 will write the code. Type “build a button that computes sales tax on the total,” and your button will appear, with the corresponding code in the sidebar. Will this render developers irrelevant? No. But it will open up some tech creation to non-developers.
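To make that concrete, here’s a hypothetical sketch of how a tool built on GPT-3 might hand that request to the model, using the same assumed beta-era openai package as the sketch above; the exact prompt wording and parameters are illustrative. The demos circulating last week presumably wrapped calls like this in a friendlier interface, so the user only sees the plain-English box and the resulting button:

```python
# Hypothetical sketch of passing the article's example request to GPT-3,
# using the same assumed beta-era `openai` package as above. A real tool
# would render the returned code as a working button; here we just print it.
import openai

openai.api_key = "YOUR_API_KEY"

response = openai.Completion.create(
    engine="davinci",
    prompt="Build a button that computes sales tax on the total. Code:",
    max_tokens=150,
    temperature=0,  # low temperature keeps generated code more predictable
)

print(response.choices[0].text)  # GPT-3's attempt at the button code
```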
GPT-3 also has the potential to revolutionize content creation, at best making it far more efficient and at worst calling into question the veracity of anything you read. (More on that later.)
What are some examples?
Those who have been playing with GPT-3 have been sharing what they’ve been able to do with it. One researcher asked GPT-3 to reimagine classic poems, and the results are pretty stunning. Another Twitter user said he began a document on how to run an effective board meeting, then asked GPT-3 to write the rest. He loved the results, including ideas he hadn’t thought of and planned to keep in the presentation, such as three steps to recruiting board members.
There must be a downside to this technology, right?
Absolutely. GPT-3’s output is rife with inaccuracies. OpenAI’s CEO, Sam Altman, admitted on Twitter: “GPT-3 samples [can] lose coherence over sufficiently long passages, contradict themselves, and occasionally contain non-sequitur sentences or paragraphs.” And while this will certainly improve in future versions of the model, the output still needs human oversight.
Once GPT-3 or other predictive language models are in widespread use, how will we know whether a given piece of text was written by a person or by an AI? There is speculation that in the future, human-written work, whether an email or a blog article, will require some kind of authentication or verified human-generated status.
Why should you care as a non-developer?
First, because it’s pretty mind-blowing. As Altman explained, “For non-programmers, it’s like experiencing the magic of programming for the first time.” But seriously: eventually, this type of tool will likely become available to all. How will that affect work and jobs? Will it make some roles obsolete? Will it create new ones? Certainly there will be a sorting-out period as we figure out the smartest places to use it, as well as the dumbest. No doubt the latter will include some huge goofs.
It’s early days. But the wise among us will start to envision how AI like this will impact our work and our careers, both for better and for worse.