Is ChatGPT worth the hype?

As the AI revolution gathers pace, we will need to balance its enormous potential with a new generation of harms built around its power to predict, persuade and mimic. Meet Chat Generative Pre-trained Transformer, the advanced language model that can help you automate invoicing or make embarrassing mistakes.

A textured illustration of a small robot inside the screen of an open laptop, with a speech bubble coming out of the robot's head. The speech bubble contains only an ellipsis.

ChatGPT fervour has few rivals among other cultural phenomena. Since its public release in November 2022, it has set a new record as the fastest application to reach 100 million users. And those users, including reviewers and detractors, have collectively produced 239 million articles and posts about the disruptive language technology. Unofficially, the chatbot is bigger than Elvis.

The bona fide user and media frenzy has been fuelled by familiar economic tropes. The first is the belief that tech innovation is inherently progressive; the second, that technology fuels indefinite productivity growth. However, there is growing evidence that we need to lace our wide-eyed hope with a healthy dose of systems thinking and some unsettling, but widely available, data.

According to the McKinsey Global Institute, the current disruption caused by new exponential technologies is 300 times larger in scale and 10 times faster than the disruption caused by technologies during the Industrial Revolution in the 18th century. The impact of these technologies on society is estimated to be 3,000 times greater than in the past.

Even so, their impact on economic growth and productivity has been negligible for well over a decade. This is not least because companies like Google and Microsoft, which has invested US$13 billion in OpenAI, primarily innovate for profit rather than for productivity. Microsoft is entitled to 75 per cent of the profits generated by OpenAI’s products.

As the recent media hype around ChatGPT attests, the tech industry’s ability to generate interest in its profit-making ventures is unparalleled. It uses the very technologies it sells to understand how to tap into human curiosity and distract us from critically assessing its impact.

The ubiquitous ChatGPT will, for instance, deny any knowledge of a marketing ploy to make it famous. When it spews out false information or miscalculates your car loan repayments, as it may, its digital poker face will remain unchanged. If caught, it will apologise profusely, eliciting sympathy. This could be the reason OpenAI used the marketing line ‘too dangerous to release’ to launch ChatGPT’s predecessor, GPT-2.

 

How ChatGPT works

GPT-3, the model family on which ChatGPT is built, is trained to generate human-like responses to users’ prompts by analysing massive amounts of text from books and the internet. Much like an autocomplete function, it composes a response one word at a time, choosing each word according to the likelihood of certain words appearing after others, a pattern it has internalised through its training.

For instance, when asked about the ways to restrict financial losses in a company, it quickly produces blocks of text discussing budgeting, forecasting and auditing. It does so not because it understands the question, but because it has learned that terms like 'budget', 'forecast' and 'audit' are likely to appear together, and alongside phrases such as 'financial losses' or 'business enterprise'.
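To make that intuition concrete, here is a deliberately toy sketch in Python: a bigram model that counts which word follows which in a tiny made-up corpus, then generates text by sampling from those counts. The real system works on sub-word tokens with a transformer network and billions of parameters, but the underlying principle of choosing the next word by learned likelihood is the same.

```python
# A toy illustration of next-word prediction (assumed example, not OpenAI code).
# It "learns" nothing but word-following frequencies from a tiny corpus.
import random
from collections import defaultdict, Counter

corpus = (
    "a budget helps restrict financial losses "
    "a forecast helps restrict financial losses "
    "an audit helps restrict financial losses"
).split()

# Count how often each word follows each other word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word(prev):
    """Sample the next word in proportion to how often it followed `prev`."""
    words, counts = zip(*follows[prev].items())
    return random.choices(words, weights=counts)[0]

# Generate a short continuation, one word at a time.
word = "a"
output = [word]
for _ in range(5):
    word = next_word(word)
    output.append(word)
print(" ".join(output))  # e.g. "a forecast helps restrict financial losses"
```

The output sounds plausible purely because 'budget' and 'forecast' frequently followed 'a' in the training text; there is no concept of finance anywhere in the program.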

Ask ChatGPT a conceptual question and it gives you sentences that may be well phrased but factually questionable. The system is trained to model language, not develop knowledge. Put simply, ChatGPT does not understand the concepts about which it generates linguistic responses, but it can sound like it does.

An accountant’s clients may not notice or care that a newsletter is written by a deepfake Dostoyevsky. Instead, they will care if their automated invoices are correct and timely. They will care if the accountant’s communication is clear and accurate. And alongside fairly rudimentary automations, accountants need the ability to trust AI to accurately analyse large chunks of data, make financial predictions or identify trends.

A recent survey by the University of Queensland and KPMG revealed that only 40 per cent of Australians believe that AI can be trusted to help us with important work. Only a quarter believe the technology will create more jobs than it will eliminate. And one-third of those surveyed lack confidence in the ability of government, technology and commercial entities to develop, use and regulate AI for the benefit of society.

The mistrust is likely to wane as more users start to engage with the chatty bot. Our brains’ critical design flaw is to anthropomorphise technology, just as we do with anything else we create – from stuffed animals to nicknamed cars. Nowhere is this more pronounced than in the way we project human features and hopes onto all things artificial and supposedly intelligent.

If you ask ChatGPT to describe itself, it will come up with ‘an engaging and convenient way for people to interact with technology’. It is neither a search engine nor a junior analyst having a real conversation with us. But our interaction with it might fool us into thinking so, as our desire for human-like interaction can trigger an emotional response. Relationships, after all, are the most transformative technology we have.

Coincidentally, the first iteration of GPT was released on Valentine’s Day 2019. Its successor, ChatGPT, carries the promise of a perfect companion – part pet, part servant, part secret weapon – who happens to be a better conversationalist than your family dog. This scenario has all the ingredients of a great and potentially disastrous love story.

 

ChatGPT and productivity

Tristan Harris and Aza Raskin, founders of the Center for Humane Technology, cite a core problem of humanity identified by biologist E.O. Wilson: the combination of ‘palaeolithic emotions, medieval institutions and God-like technology’, which opens a gap between the complex issues technology is creating and our ability to deal with them.

Technology, Harris and Raskin believe, is making us less – not more – productive and able to deal with reality. Synthetic relationships have already begun to transform us due to AI’s ability to mimic and persuade, and our human inability to switch off emotions.

The outcome of the AI-human partnership will be partially determined by our starting ideological position, be it an optimistic or a pessimistic one. Do we believe AI business models can be trusted to close the complexity gap created by tech innovation? Can we curtail AI’s gigantic CO2 emissions? Should societal well-being take precedence over technology-driven productivity gains, which have been at their lowest level since the 18th century in some OECD countries? And, last but not least, can we trust AI to shape our culture?

The paradox is not lost on Nick Cave, who was prompted to chime in and assess AI’s songwriting credentials. Commenting on a ChatGPT-written song that had been sent to him by a fan, which was meant to emulate his writing style, the artist was unequivocal: “[T]his song is bullshit, a grotesque mockery of what it is to be human.” His verdict reverberated – a chorus against algorithmic awe. “The apocalypse is well on its way,” he lamented.

Academics, meanwhile, hold varied opinions. Despite obvious cause for concern that ChatGPT will join the ranks of tools used for contract cheating, many are underwhelmed by the quality of the work it produces. Others see an important role in preparing students to use generative AI as a collaborative tool, and discuss adjusting their teaching and assessment processes to support that outcome.

So far, the revolutionary cyber scribe sounds more like a mega-fibber that generates human-sounding text than a Noam Chomsky in a cyberchat. Chomsky himself called the chatbot a high-tech plagiarist, questioning the ability of labour-saving technologies to replace people.

 

ChatGPT, sustainability and regulation

A bigger elephant in the ChatGPT room is sustainability – how power-intensive AI systems are. A Karma Metrix analysis by Chris Pointon estimates that ChatGPT could emit about 3.8 tons of carbon dioxide equivalent (CO2e) every single day. Training ChatGPT alone resulted in emissions equivalent to a 700,000-kilometre car ride.
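As a rough sanity check on the scale of that training figure, a back-of-envelope conversion is possible. The emission factor below is an assumed ballpark average for a petrol passenger car, not a number taken from the Karma Metrix analysis itself:

```python
# Back-of-envelope conversion of the reported 700,000 km car-ride equivalent.
# The per-kilometre factor is an assumed ballpark, not from the cited analysis.
km_equivalent = 700_000      # reported car-ride equivalent of ChatGPT training
kg_co2e_per_km = 0.2         # assumed average for a petrol passenger car
tonnes = km_equivalent * kg_co2e_per_km / 1000
print(f"~{tonnes:.0f} tonnes CO2e")  # roughly 140 tonnes
```

On that assumption, training-related emissions land in the low hundreds of tonnes of CO2e, which is consistent with the order of magnitude implied by the daily estimate above.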

The traffic from AI’s exponentially growing user base can only exacerbate climate problems, not help us solve them. The Internet, as Pointon notes, is the largest coal-fired highway on the planet. So long as our human ecosystem is worth less to us than AI-driven entertainment, profit or productivity, prospects for improvement will remain grim.

So, what kind of economy can be built on the business-AI machine alliance? For starters, it might not be a sustainable one. Over the next few years, thousands of start-ups will try to profit from chatbots while actively shaping humanity’s socio-economic transformation. In some cases, AI agents might become primary economic drivers, a business panacea.

Either way, regulation will need to match the speed at which technology is moving. The question of who will regulate AI is a pressing one, as emerging AI models will need to adhere to international principles of economic and climate justice. Yet most AI regulation is likely to be sovereigntist in practice.

So far, only China and Europe have been hard at work trying to rein in artificial intelligence by tightening their grip on the technology sector and the way its products are used. Other countries have been less resolute.

Australia has no laws specifically regulating AI, save for the non-binding AI Ethics Principles and the Privacy Act 1988, which regulates the AI-led collection of biometric data. While the Government’s response to the Privacy Act Review Report is set to modernise Australian privacy laws, one thing is certain: a new front will emerge around who sets the standards and guidelines for AI.

For now, we’d be wise to at least refrain from pronouncing ChatGPT’s name in French, as its transcription – ‘sha-jeu-peh-teh’ (‘chat, j’ai pété’) – translates to something rather less dignified: “Cat, I farted”.

 

The Attorney-General’s Department is now seeking feedback on the Government response to the Privacy Act Review Report from individuals as well as from public and private entities.

The report’s 116 proposals, aimed at modernising Australian privacy law, raise complex policy issues.

Complete the survey or provide a written submission by 31 March to help influence reforms.

 
