Generative AI isn’t theoretical: Millions of people are already using apps like ChatGPT to write books, create art, and develop code.
It’s going to be the “greatest force for economic empowerment” society has ever seen. It’s going to take away our jobs. It’s going to “generate a new form of human consciousness.” It’s going to kill us all.
Generative AI — the new kind of artificial intelligence that can create original content, including essays, fine art, and software code — is the talk of the town in Silicon Valley.
If you’re one of the over 100 million people who have used ChatGPT, or created a pop art-style illustrated portrait of yourself using Lensa, the popular image-generating app, you know what the latest version of this technology looks like in action.
Apps like ChatGPT, created by the Microsoft-backed startup OpenAI, are just the beginning of what generative AI can do, according to its boosters. Many believe it’s a once-in-a-lifetime technological breakthrough that could impact virtually every aspect of society and disrupt industries from medicine to law.
“AI had a ‘wow’ moment” in November with the release of ChatGPT, said Sandhya Venkatachalam, a partner at the prominent VC firm Khosla Ventures, which was an early investor in OpenAI. She compared recent advancements in generative AI to the creation of the internet itself.
“I think this is absolutely on the same order of magnitude. That’s a personal belief.”
Silicon Valley hasn’t seen a true technological breakthrough in more than a decade. In the ’80s, we had the advent of the personal computer; in the ’90s, the internet; and in the 2000s, the mobile phone and the suite of apps built on it. Since then, the tech world has been waiting for the next big invention (some are still bullish that it could be Web3 or AR/VR). Now, many see generative AI as a contender.
But people in Silicon Valley are prone to making grand proclamations about new technologies. If you’ve watched the rise and fall of crypto or heard grandiose plans about how we would all be living in the metaverse by now, you may be wondering: Is the excitement about generative AI just hype?
The answer is that while there’s plenty of inflated hype about generative AI, for many people it’s much more real than Web3 or the metaverse has ever been. The key difference is that millions of people can — and already are — using generative AI to write books, create art, or develop code. ChatGPT is setting records for user adoption: It took the app only five days to reach 1 million users, while Instagram took 2.5 months and Twitter two years to hit the same milestone, according to a recent Morgan Stanley report. Even though the technology is nascent, almost anyone can quickly grasp its potential with apps like ChatGPT, DALL-E, or Lensa. Which is why so many businesses, big and small, are jumping to capitalize on it.
In just the past few months, we’ve already seen how generative AI is setting the business agenda for major tech companies. Google and Microsoft — which are fiercely competing with each other — are rolling out their own chatbots and baking generative AI into core products like Gmail and Microsoft Word. That means billions of people could soon be using the technology not just in one-off chatbot conversations, but in the apps we rely on every day to work and communicate with each other. Other major tech firms, like Meta, Snap, and Instacart, are fast-tracking generative AI into their main apps, too.
It’s not just the tech giants. The buzz around generative AI has kick-started a new wave of investment in smaller startups at a time when money in Silicon Valley is tighter than it used to be: The North American tech industry saw a 63 percent drop in startup deals in the last quarter of 2022 compared to the year prior, according to Crunchbase News.
The most convincing evidence that generative AI is more than hype is that all kinds of people, including many who wouldn’t think of themselves as tech experts, are using ChatGPT for unexpected reasons. College students are using the technology to cheat on essay exams. Job seekers are using it to avoid the dreaded task of writing a cover letter. Media companies like BuzzFeed are using it to generate listicles and help with the reporting process.
“There used to be this question about ‘is this technology ready for building useful products for people?’” said OpenAI director of product and partnerships Peter Welinder. “What ChatGPT really showed is that people are using it for all sorts of use cases, and people in various professions are finding it useful in all parts of life.”
There are plenty of questions and concerns about the new technology. If left unchecked, generative AI could perpetuate harmful biases, enable scammers, spit out misinformation, cause job loss, and — some fear — even pose an existential threat to humanity.
Here’s what to make of all the excited, nervous buzz around generative AI.
Separating hype from reality
From VC cash to industry events to hacker houses filled with 20-somethings working on their next AI project, generative AI has sparked a frenzy in tech at a time when the industry needed some excitement.
In 2022, investors poured more than $2.6 billion into generative AI startups across 110 deals — a record high for investment in the field, according to a recent report from business research firm CB Insights. Some of the biggest bets have come from major tech companies: Microsoft invested $10 billion in OpenAI in January, and Google invested $300 million in the generative AI startup (and OpenAI competitor) Anthropic in February.
“We get one of these technology waves every 14 years,” said James Currier, co-founder and partner at technology venture capital firm NFX. Currier’s firm has invested in eight generative AI companies in the past several years, and he’s personally talked to around 100 generative AI startups in the past two months. “It’s going to change everything a little bit.”
But despite the increase in overall funding in the space, many generative AI startups are on tight budgets, and some don’t have any funding at all. Of the 250 generative AI companies the CB Insights report identified, 33 percent have no outside equity funding, and another 51 percent are at Series A or earlier, which shows how young many of these companies are.
A big challenge facing these AI upstarts: Training a single large AI model can cost millions of dollars. And as the volume of training data on the internet keeps growing, the cost of training the kinds of machine learning models that generative AI runs on could reach as much as $500 million for a single model by 2030, according to a recent report by the AI research group Epoch AI.
“We are not experts in training 200-billion-parameter models. It’s a sport of kings,” said Sridhar Ramaswamy, CEO of Neeva, an advertising-free search engine that recently launched an AI version of its product. “You need lots of money that we don’t have.” Instead, Ramaswamy said that startups like his can win by focusing on specific use cases — in his case, search — but that before building a product, startups “need to figure out, ‘Is this a fad? Or is it creating unique user value?’”
None of these hurdles seems to be dampening the excitement surrounding the new AI and its potential. In recent months, San Francisco and Silicon Valley have seen a boom in generative AI meetups, co-working spaces, and conferences that feels like a return to the excitement of the mobile startup boom of the late aughts. In February alone, San Francisco hosted a generative AI-focused hackathon, a women-in-AI lunch, and a “Building ChatGPT from scratch” workshop, among dozens of other AI-focused events. Young tech founders have nicknamed the San Francisco neighborhood of Hayes Valley “Cerebral Valley” because of the sudden concentration of AI-related events and companies in the area.
“I’m very bullish on this whole AI wave because it feels like it’s at the level of the app store being released,” said Ivan Porollo, co-founder of the Cerebral Valley newsletter and AI community. Porollo is a tech entrepreneur who recently moved back to San Francisco. “It just feels different. It feels like a generation of technology that’s going to affect our future for the remainder of our lives.”
At a sold-out conference of over 1,000 people in San Francisco on Valentine’s Day, hosted by Jasper, a startup that uses generative AI to create marketing copy, the atmosphere was charged with optimism and excitement. Attendees largely ignored the stunning waterfront views of the Bay Bridge as they stared at the stage, listening intently to executives from some of the top generative AI startups, like OpenAI, Stability AI, and Anthropic.
“I think this is going to rewrite civilization,” said Nat Friedman, the former GitHub CEO turned investor, sitting cross-legged onstage for an interview. “Buckle up.”
Friedman was one of many speakers that day who were adamant that recent advancements in AI are revolutionary, even if the technology isn’t perfect yet.
Many of the founders I’ve been talking with at these generative AI events have promising ideas, like a platform that lets architects generate designs from written descriptions of the building they envision, or an app that generates a daily email digest of the top social media posts matching your interests. But most of their startups are still extremely early-stage, with just an idea or a rough demo to show.
So far, one of the more developed use cases for generative AI is creating marketing and other media content, and Jasper is one of the biggest examples. The two-year-old company uses AI to create marketing copy like blog posts, sales emails, SEO keywords, and ads. The company said it made $35 million in revenue in 2021, and as of December it had close to 100,000 paying customers, including brands like Airbnb, IBM, and HarperCollins. In November, it raised $125 million in funding at a $1.5 billion valuation. Jasper did not disclose its costs to Recode, so we don’t know whether it’s profitable.
Some media companies like BuzzFeed have also started using OpenAI’s technology to create personality quizzes and help staffers brainstorm. And open source generative AI firm Stability AI says it has paying clients in the film industry who use its software to autogenerate images.
But the bigger promise of generative AI is that it will change our world beyond writing ads. The tech’s biggest proponents hope it will transform fields like medicine and law by diagnosing disease or arguing cases in court better than humans can. Leading academic experts caution we’re very far from that, and some question if we’ll ever get there.
“I’m not convinced that some of the really fundamental problems with these [AI] systems, like their inability to tell if something is true or false ... I’m not sure that those things are going to be so easy to fix,” said Santa Fe Institute professor Melanie Mitchell, who specializes in AI and cognitive science. “I think these problems are going to turn out to be harder than some people think.”
Some regulators also have their doubts. The FTC recently published a blog post warning tech companies to “keep your AI claims in check” and “not to overpromise what your algorithm or AI-based tool can deliver.”
“If you think you can get away with baseless claims that your product is AI-enabled, think again,” the post stated, echoing a critique of recent AI buzz that many companies are simply tacking “AI” onto whatever they’re doing just to capitalize on the hype.
AI hype isn’t new. In 2019, a VC firm’s study found that 40 percent of European “AI startups” didn’t actually use AI in their core businesses. Now, with the recent fanfare around generative AI in particular, some critics worry the buzz is mostly hype. It doesn’t help that some attempts by major companies to integrate the technology have backfired, like Microsoft’s Bing AI chatbot giving unhinged responses to people, or tech publication CNET botching an attempt to automate financial columns, which ended up plagiarizing other people’s work and publishing misinformation.
I asked the venture capitalist James Currier whether he thought there was a risk in overhyping generative AI.
“I think this is the sort of cultural issue that people have with Silicon Valley, which is that we like drinking the Kool-Aid,” he told me. “We should be drinking the Kool-Aid and getting excited about stuff, and thinking hard about what we can create. Because at this point, the technology is just waiting for us to catch up to it.”
The limitations and dangers of generative AI
For all its potential, generative AI also has major limitations and poses serious risks. I would put those risks in three categories: getting facts wrong, promoting harmful or offensive content, and threatening people’s livelihoods or autonomy. And now that Google and Microsoft are racing to beat each other at this technology, we’re seeing it rolled out to the masses while those problems remain unsolved.
To the first point, generative AI can get the facts wrong. A lot. Upon release, Microsoft’s version of ChatGPT, BingGPT — equipped with a freshly updated index of the entire internet — couldn’t tell you when the new Avatar movie would be playing near you (it recently insisted that Avatar 2 was not yet in theaters). And Google’s demo of its yet-to-be-released chatbot, Bard, gave an incorrect answer about which telescope took the first pictures of a planet outside our solar system.
“These systems are extremely good at some things, but they often will make these weird, very un-human-like errors and really show that they are not thinking the way that humans think,” said Mitchell.
For the past several years, it was hard to gauge just how advanced generative AI was because much of its development was done in private. Google — which employs some of the world’s leading AI scientists — was long considered the industry leader in the field. But aside from research papers and some behind-the-scenes work, the public couldn’t really see Google’s generative AI capabilities.
Everything changed when OpenAI, with Microsoft’s backing, fast-tracked its latest generative AI technology, ChatGPT, to the masses. Fanning the flames, Microsoft then used the technology underlying ChatGPT to build its own standalone “BingGPT” chatbot, challenging Google’s dominance in search and setting off a technological arms race.
“I hope that with our innovation, [Google] will definitely want to come out and show that they can dance,” Microsoft CEO Satya Nadella told The Verge last month. “And I want people to know that we made them dance, and I think that’ll be a great day.”
Google, under immense pressure to show its own generative AI capabilities, announced it will be releasing its own AI chatbot, Bard, in the coming weeks. The company says it has taken longer than some of its competitors to release generative AI technology publicly because it wants to make sure it’s doing so responsibly.
“The strategy we’ve chosen is to move relatively slowly in the space of a release in these models,” Douglas Eck, Google director of research on its AI-focused Brain team, recently told Recode. “I think history will tell if we’re doing the right thing.”
Google’s caution up to this point is for good reason: If left unchecked, generative AI can do worse than just get the facts wrong. It can reflect racist and sexist biases in the data it’s trained on, as seen when the image-generation app Lensa sexualized its female avatars. And on a macro level, it could create economic instability by displacing jobs at an unpredictable scale.
AI can also be intentionally misused. One recent example: A reporter used an audio generative AI tool to create a fake recording of his own voice, then called his bank and successfully hacked into his account using the recording. Another: Microsoft’s AI chatbot left New York Times reporter Kevin Roose “deeply unsettled” when, during the course of a lengthy philosophical conversation, the chatbot told Roose it wanted to be alive, professed its love for the reporter, and encouraged him to leave his wife.
The worry is that AI could be used to manipulate people’s emotions and sense of reality, whether that’s on purpose (like a scammer using AI to impersonate someone else) or through unintended behavior from the AI itself (such as in the case of BingGPT going “unhinged” with its emotionally loaded responses).
Going 10 steps further: Some of generative AI’s most ardent proponents also worry it could one day outsmart humans, posing an existential threat to humanity. OpenAI, which originally began as a nonprofit, was created in large part because of a fear of what’s called “AGI” — artificial general intelligence — the idea that AI will reach a level of intelligence that matches or surpasses human abilities.
When Sam Altman, the CEO of OpenAI, was asked at a recent tech event about the best- and worst-case scenarios for AI, he said that “the bad case — and I think this is important to say — is, like, lights out for all of us.”
Many preeminent scientists are still debating this idea.
“The development of full artificial intelligence could spell the end of the human race,” Stephen Hawking told the BBC in 2014. It’s an idea that may seem far-fetched but, as my colleague Kelsey Piper wrote, is increasingly plausible to the people who are actually building this technology.
“Since the beginning of AI, people have kind of fantasized about, ‘Will we have these robots that are like the ones in the movies that can really do everything a human can do and even more?’” said Mitchell. “But we don’t have a set of criteria that we can say, ‘Well, it’s achieved these 10 things, and we know it’s fully intelligent.’”
Although we might be far from a world of killer AI robots seeking revenge on their human overlords, the fact that the creators of generative AI worry about its misuse is another reason to take it seriously.
The major players in generative AI — big tech companies like Google, Microsoft, and Meta, as well as OpenAI — also have internal policies and teams weighing the harms of these products. But critics say that tech companies’ business interests can cut against their ethical ones. Google shook up its ethical AI team in early 2021 after two of its leaders, Timnit Gebru and Margaret Mitchell, said they were pushed out over concerns that the company was censoring their critique of bias in large language models.
Many — including some tech companies themselves — have called for outside regulators to step in with guardrails. While governments have historically been slow to catch up with developing areas of technology, some states and cities have already passed legislation limiting certain kinds of AI, like facial recognition and policing algorithms. We might start seeing the same kind of patchwork regulation around generative AI.
In many ways, this new form of AI is easier to understand than other recent tech trends, like blockchain or the metaverse — which are largely conceptual — because it’s tangible. You don’t need a $400 VR headset or a crypto wallet to see what generative AI can do. All you need to do is load up ChatGPT and ask it a question, or type a few words into DALL-E and watch it spit out art.
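That tangibility extends to developers, too. To give a sense of how little code it takes, here’s a minimal sketch using OpenAI’s Python library as it existed in early 2023; the API key and the prompts are placeholders you’d swap for your own, and it assumes you’ve signed up for API access:

```python
# A minimal sketch of generating text and images through OpenAI's API,
# using the openai Python library as of early 2023 (pip install openai).
# The API key and prompts below are placeholders, not recommendations.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder: create a key at platform.openai.com

# Text generation, using the same family of models that powers ChatGPT
chat = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Write a two-line poem about Hayes Valley."}],
)
print(chat.choices[0].message.content)

# Image generation, as in DALL-E
image = openai.Image.create(
    prompt="a pop art-style illustrated portrait",
    n=1,
    size="512x512",
)
print(image.data[0].url)  # link to the generated image
```

A few lines like these are also why startups can build products on top of these models rather than training their own “sport of kings” systems from scratch.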
For better or worse, generative AI has major potential to reshape our concept of creativity, and the proof is in the products. Which is why I can say that it will probably be more than just a trend. If you don’t want to take my word for it, try it out yourself.