Google Duplex was going to change the world five years ago

How Google's Duplex AI system works, its limitations and potential, and what its story suggests about newer AI technology, particularly Large Language Models.

Lucas A. Meyer


May 24, 2023



In 2018, Google announced its artificial intelligence (AI) system called Google Duplex, which was designed to make phone calls on behalf of users for tasks such as booking haircut appointments or making restaurant reservations. The technology sparked both excitement and concern, as people worried about the potential implications of AI taking over jobs and being used for social engineering purposes.

The Reality of Google Duplex

As with many new technologies, the initial hype surrounding Google Duplex has given way to a more nuanced understanding of its capabilities and limitations. One user review revealed that the actual experience of using Duplex was less than ideal: it took longer to set up the AI system to make a call than it would have taken to simply make the call yourself. Google later admitted that at least 25% of calls made by Duplex failed and required human intervention, highlighting the limitations of the technology. Furthermore, the COVID-19 pandemic significantly reduced the need for booking services. Dropping my kid off at a high school in 2022 quickly confirmed to me that haircuts were not a priority.

Duplex’s Integration and Current State

Google Duplex has since been integrated into Pixel smartphones and the Google Assistant, allowing users to access the AI system more easily. However, user reviews on platforms like Reddit have been mixed, with many expressing the sentiment that Duplex is “great when it works.” A few people have mentioned that they have had success using Duplex to make restaurant reservations, but not all restaurants accept it.

Large Language Models and the Future

When Duplex was demonstrated by Google's CEO in 2018, it seemed ready to take over the world. It's a good reminder that demonstrations are usually heavily scripted and tested, which can give the impression that things work much better than they do in the real world.

This is also a good reminder to take much of what you see about Large Language Models with a grain of salt. LLMs can do amazing things, but they are not perfect, and in many cases they are still very imperfect. Emerging techniques like chaining and goal-seeking could significantly improve the capabilities of systems based on language models, allowing them to better understand tasks and improve automatically, but most products I see in my day-to-day work are just thin wrappers on top of an existing model like ChatGPT, with minimal engineering. I don't think GPT is going to kill engineering. I'm entirely on the opposite side of the spectrum: I think it's going to make engineering more important than ever. Somebody will need to integrate and monitor these models and their applications, and that is not going to be a trivial task.
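To make the chaining idea concrete, here is a minimal sketch in Python. The `call`-style step functions below are hypothetical stubs standing in for real LLM requests; the point is only the structure: each step's output becomes the next step's input, which is what separates an engineered pipeline from a thin wrapper around a single prompt.

```python
from typing import Callable, List

def chain(steps: List[Callable[[str], str]], initial_input: str) -> str:
    """Run each step on the previous step's output and return the final result."""
    result = initial_input
    for step in steps:
        result = step(result)
    return result

# Hypothetical stubs standing in for real model calls:
def summarize(text: str) -> str:
    # A real step would send a prompt like "Summarize: {text}" to an LLM.
    # Here we just keep the first sentence to keep the sketch runnable.
    return text.split(".")[0] + "."

def shout(text: str) -> str:
    # Stands in for a second model call, e.g. rewriting in a different tone.
    return text.upper()

pipeline = [summarize, shout]
print(chain(pipeline, "Duplex books appointments. It sometimes fails."))
# -> DUPLEX BOOKS APPOINTMENTS.
```

In a real system, each step would also need retries, output validation, and monitoring, which is exactly the kind of engineering work I expect to become more important, not less.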


If you follow the pace of AI technology releases and get caught up in the hype, you may find yourself worried the way people were worried about Duplex in 2018. Take your time to learn about new technologies. Be intentional: it's definitely worthwhile to understand what LLMs are and what their advantages and limitations are, but not necessarily to worry about every new application, like AutoGPT. Remember the Vicki Boykis rule: whatever is important will still be here six months from now; a lot of it will not.