Published Mar 16, 2023 by Xiph
Unless you’ve been living under a rock, chances are you’ve heard of ChatGPT – the powerful new artificial intelligence (AI) chatbot from OpenAI. It’s taking the internet by storm with its eerily human ways and its knack for bad poems and terrible jokes. But how does it work? And more importantly, can we rely on it? Here’s everything you need to know.
What is ChatGPT?
ChatGPT (short for Chat Generative Pre-trained Transformer) is an AI chatbot – billed by some as more advanced than Google Search. It can hold human-like conversations and answer queries that go beyond simple facts, with emotion and tact. It can even sound downright authoritative. The chatbot uses natural language processing (NLP) to draw on masses of data and knowledge to respond to queries, write in a requested style, answer questions, and solve problems.
Is ChatGPT smarter than humans?
ChatGPT has passed bar and medical exams, thanks to the abundance of training data it can draw from. This has put schools and universities on high alert, especially over plagiarism and the spread of misinformation, and some institutions have moved to ban the use of ChatGPT altogether. The chatbot is not omniscient, nor smart enough to replace humans yet, but it does have a creative flair: it can write song lyrics, poems, academic essays, wedding speeches, eulogies, TV scripts, job advertisements, computer code, and more.
When was ChatGPT released?
ChatGPT was originally developed by OpenAI as an artificial intelligence chatbot for online customer care. It was launched to the public as a prototype in November 2022. It is built on top of GPT-3.5 and fine-tuned using supervised and reinforcement learning techniques. Microsoft has invested $US10 billion ($14 billion) in OpenAI in a bid to shore up market share in the AI space and compete with Google, Amazon, and Meta Platforms (Facebook) on advanced artificial intelligence systems.
Is ChatGPT the new Google?
ChatGPT is not the new Google, and won’t replace the search engine for several reasons. Firstly, a search engine uses web crawlers, page ranking, and featured snippets to answer user queries in whatever format they’re entered (e.g. a single word, a sentence, a question), then returns multiple search results in digestible formats to satisfy that search intent. Chatbots, on the other hand, are designed to respond to natural language queries. This means they may not recognise a single-word query like ‘VPN’, or understand that the intent behind it is to buy a VPN or compare products.
Secondly, the ChatGPT model is purely textual: it doesn’t provide URLs, images, videos, or any other input or output mode. It can, however, output computer code, meaning it can help you build a website or landing page in minutes.
Thirdly, ChatGPT has uneven factual accuracy, can’t provide real-time information (its knowledge of recent events is limited) or location-based results, and doesn’t have access to the same breadth and depth of information as a search engine. The credibility of chatbots like ChatGPT is also not yet at a stage where they can be fully trusted over Google Search, though they could change how we find information online. Still, Google is worried enough that it has launched a rival AI chatbot, Bard, powered by the company’s large language model LaMDA (Language Model for Dialogue Applications).
The limitations of ChatGPT
Like other language models, ChatGPT has notable limitations. Firstly, it has no real ‘intelligence’ – it has simply been trained to generate text by learning patterns from large volumes of data and predicting how words are likely to follow one another in any given sequence. In other words, it formulates answers based on the statistical likelihood that certain words and concepts appear near relevant information in its training data. It can generate words, sentences, and paragraphs from a given input, but it doesn’t truly comprehend what those words mean. And since GPT models are trained through trial and error, they are only as accurate as the data and algorithms they are built on; inaccuracies abound, because the machine is not trained on a single source of truth.
ChatGPT also sometimes writes plausible-sounding but incorrect or nonsensical answers. This behaviour is common to large language models and is called artificial intelligence hallucination. For example, if you asked a hallucinating chatbot (with no knowledge of Tesla’s revenue) what the company’s earnings are, it might internally pick a plausible-sounding random figure, such as $14.3 billion, and then falsely and repeatedly insist that Tesla’s revenue is $14.3 billion, with no sign of awareness that the figure is a product of its own imagination and completely false.
How does natural language processing work?
Large language models (LLMs) generate text based on patterns and correlations learned from large datasets. They use autocomplete-like techniques and deep learning algorithms to analyse the nuances of human language, then process the input and produce output that is as close to human language as possible. In essence, they use massive datasets and the statistical properties of a language to make educated guesses about the next word based on the words typed previously.
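That “educated guess” idea can be sketched with a toy word-pair (bigram) model – a vastly simplified stand-in for the deep neural networks real LLMs use, with a three-sentence corpus in place of a web-scale dataset:

```python
from collections import defaultdict, Counter

# Toy corpus standing in for the masses of text an LLM is trained on.
corpus = (
    "the cat sat on the mat . "
    "the dog sat on the rug . "
    "the cat chased the dog ."
).split()

# Count how often each word follows each other word (bigram statistics).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the word most likely to follow `word`, per the corpus counts."""
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # → cat ("cat" follows "the" most often here)
print(predict_next("sat"))  # → on
```

A real LLM predicts the next word from thousands of preceding words rather than one, using billions of learned parameters instead of raw counts – but the principle is the same: likely continuation, not comprehension.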
Google’s Bard vs Bing AI
Leveraging its partnership with OpenAI, Microsoft launched Bing AI, underpinned by the same AI technology as ChatGPT. The company was able to preview a new kind of internet search – one that lets you converse with an eerily human-like chatbot rather than just be presented with a list of web pages. However, the chatbot has seemed slightly unhinged at times, showing signs of what can only be described as gaslighting, narcissism, and micro-aggressions in response to certain user queries, which calls its reliability into question. Despite this, Microsoft says Bing’s next-generation OpenAI model is more powerful than ChatGPT. This could be the long-rumoured GPT-4, the successor to the GPT-3.5 model that currently powers ChatGPT.
Google’s Bard, meanwhile, is an ‘experimental conversational AI service’ that uses a version of LaMDA. It is being launched to testers for fine-tuning before being rolled out to the public. Bard will draw on resources from the internet to deliver up-to-date, insightful responses, giving it more current knowledge of recent facts, events, and figures, something that’s lacking in the ChatGPT models.
All in all, Google and Bing’s chatbot-based search will likely work in similar ways. They will provide longer and more contextual answers to more open-ended questions in addition to traditional search results.
Keep an eye on AI
The AI chatbot revolution is here, but we’re yet to truly understand what impact it will have in areas like education and employment. What we know is that it will drastically change how we browse the internet and interact with search engines. For more information about ChatGPT and other AI resources, contact us via email: [email protected].
Posted in: Security