FF Life: GPT-4 and ChatAI could be your digital genie

They could be your personal librarian and digital assistant, able to translate, summarise, and even help you learn a new language

Charles Assisi

Pretty much all of this edition of the newsletter has been generated using GPT-4, and that includes the images. All we’ve done is think up the questions that may occur to people unfamiliar with the technology and feed them to the AI that everyone is talking about. These questions, in technical terms, are called ‘prompts’. That took some doing.

For our part, we set the output to ‘precise’ and insisted that the language be accessible and slightly humorous. To ensure the software wasn’t making things up, we asked it to provide citations. We cross-checked the parts where citations were available. A few minor edits later, this newsletter was ready for dispatch.
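If you’re curious what those instructions look like under the hood, here is a rough sketch of the same idea expressed through OpenAI’s API rather than a chat window. It isn’t our exact workflow (we worked in the chat interface): it assumes the official openai Python package, an API key in your environment, and the gpt-4 model name, and it treats a low temperature setting as a stand-in for ‘precise’.

    # A sketch of the kind of instructions we gave GPT-4, written as an API call.
    # Assumes the official `openai` Python package and OPENAI_API_KEY set in the environment.
    from openai import OpenAI

    client = OpenAI()

    response = client.chat.completions.create(
        model="gpt-4",
        temperature=0.2,  # a low temperature roughly mirrors the 'precise' setting
        messages=[
            {
                "role": "system",
                "content": (
                    "Write in accessible, slightly humorous language. "
                    "Cite a source for every factual claim, and say so when you are unsure."
                ),
            },
            {
                "role": "user",
                "content": "Explain the difference between a search engine and a chatbot like ChatAI.",
            },
        ],
    )

    print(response.choices[0].message.content)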

If technology such as this can write, should we be worried? The way we look at it, it can be deployed as a powerful ally that amplifies what we can and must do. Ignoring it is living in denial.

GPT-4 and ChatAI: Your friendly neighbourhood tech wizards

Imagine if you had a magical genie, like Aladdin's, but instead of granting wishes, it answered questions. Questions like, "What's the capital of France?" or "How do I bake a chocolate cake?" or even "What's the meaning of life?" (Okay, maybe not the last one, but you get the idea). That's what GPT-4 and ChatAI are like—a digital genie at your fingertips!

Wait a second: So what’s the difference between a search engine such as Google and this ChatAI we’re talking about here?

A regular search engine is like a vast, sprawling library with millions of books. When you ask a question, the librarian (the search engine) hands you a stack of books that might contain the answer. It's up to you to flip through the pages to find exactly what you're looking for. Sometimes it's quick, but other times you might find yourself lost in a sea of information, not all of it relevant.

ChatAI, on the other hand, is like having a personal librarian who not only fetches the books but also reads them for you and gives you a summary of the exact information you need. It's a conversation, where you can ask follow-up questions, clarify doubts, and even ask for opinions. Instead of giving you everything that might be related, ChatAI tries to understand what you're really after and provides a more focused and human-like interaction.

For example, if you ask a search engine, "How do I fix a leaky faucet?" you'll get a list of articles, videos, and forums. With ChatAI, you'd get a step-by-step guide, and you could ask follow-up questions like, "What if I don't have a wrench?" or "What's plumber's tape?" It's like having a handyman over your shoulder, guiding you through the process.

In essence, while a search engine provides a list of places where you might find the answers, ChatAI aims to be the answer itself, making the process much more interactive and personal.
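To see why the conversation matters, here is a minimal sketch of that leaky-faucet exchange as it might look through OpenAI’s API. The point is that each follow-up is appended to the same running list of messages, so the model still knows you are talking about the faucet. Again, this assumes the openai Python package and the gpt-4 model name.

    from openai import OpenAI

    client = OpenAI()

    # The conversation is just a running list of messages.
    history = [{"role": "user", "content": "How do I fix a leaky faucet?"}]

    first = client.chat.completions.create(model="gpt-4", messages=history)
    history.append({"role": "assistant", "content": first.choices[0].message.content})

    # The follow-up rides on the same history, so "What if I don't have a wrench?"
    # is understood in the context of the faucet, unlike a fresh search query.
    history.append({"role": "user", "content": "What if I don't have a wrench?"})
    second = client.chat.completions.create(model="gpt-4", messages=history)

    print(second.choices[0].message.content)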

And what on Earth is GPT-4?

GPT-4, or "Generative Pre-trained Transformer 4" (sounds fancy, right?), is like the brainy kid in class who knows a bit about everything. But instead of being annoying, it's incredibly helpful. It's a computer program designed by OpenAI that can understand and generate human-like text. Think of it as a super-smart parrot that doesn't just mimic words but understands and responds to them.

Now, if GPT-4 is the brainy kid, ChatAI is the chatty sibling. It's a tool that uses GPT-4 to have conversations with you. Imagine having a chat with Sherlock Holmes, Albert Einstein, and your favourite comedian all rolled into one—that's ChatAI for you!

How does ChatAI make life easier? 

Ever had those moments when you're trying to remember a recipe, or need quick advice on a DIY project, or just want to know if penguins have knees? (Spoiler: They do!) Instead of scrolling through pages of search results or waiting for that one friend to text back, you can just ask ChatAI. It's like having a personal assistant who's always awake (and doesn't run on coffee).

Got it. Now, I’ve heard these techies throw around terms such as LLM and DALL·E. Tell me more about them. What are they really?

Speaking of magic, ever heard of LLM? It stands for "Large Language Model," and it’s the tech wizardry behind GPT-4. And then there’s DALL·E, not to be confused with the cute robot from that movie. DALL·E is an artist: a digital Picasso, if you will. It can create images from descriptions. Tell it to draw a "two-headed flamingo wearing sunglasses," and voila! You’ve got yourself a masterpiece.

How does DALL·E create images?

Imagine you're a chef in a kitchen full of ingredients, and someone asks you to make a dish they just invented in their head. You listen to their description and use your skills and the ingredients at your disposal to whip up the dish. That's kind of what DALL·E does, but with images.

DALL·E is an AI program created by OpenAI, and it's like a chef for digital art. When you give it a text prompt, DALL·E uses its 'recipe book'—a vast database of images and their descriptions—to understand what you're asking for. It then 'mixes' different elements it has learned during its training to create a brand new image that matches your description.

Here's the technical kitchen magic: DALL·E uses a type of AI called a neural network, a web of artificial 'neurons' loosely inspired by the human brain. This network has been 'trained' on millions of images and their descriptions, learning patterns and associations between them. When you give DALL·E a prompt, it uses those patterns to generate a new image that combines elements in a way that makes sense, based on what it has learned.

So, if you ask DALL·E for "a two-headed flamingo wearing sunglasses," it knows what flamingos look like, understands the concept of 'two-headed,' and knows what sunglasses are. It then creates an image that combines all these elements in a coherent way, often with a touch of creativity that can be surprisingly human-like.

In short, DALL·E creates images by drawing on a vast knowledge of visual concepts to bring text descriptions to life, much like a chef creates a dish by combining ingredients in just the right way.
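If you’d like to try the flamingo yourself, here is a minimal sketch using OpenAI’s image API. It assumes the openai Python package and the hosted dall-e-3 model; the size and number of images are just illustrative choices.

    from openai import OpenAI

    client = OpenAI()

    # Hand DALL·E the 'dish description' and get back a link to the finished image.
    result = client.images.generate(
        model="dall-e-3",
        prompt="A two-headed flamingo wearing sunglasses",
        size="1024x1024",
        n=1,
    )

    print(result.data[0].url)  # a URL pointing to the generated image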

Then there are plug-ins available to go with all this? What are these about? 

Think of OpenAI's technology as a smartphone. It's already a powerful device, but when you start adding apps (or in this case, plug-ins), it can do so much more.

Plug-ins are additional tools or applications that can be integrated with OpenAI's core services to enhance their capabilities or add new ones. They're like the Swiss Army knife attachments that turn a simple blade into a multi-tool.

Here are a few examples of plug-ins that can be clubbed with OpenAI’s models:

  • Translation Plug-ins: These are like the polyglot friends who can translate your thoughts into any language. You could type something in English, and the plug-in would help OpenAI's models translate it into Spanish, Mandarin, or even Dothraki if you're a "Game of Thrones" fan.
  • Summarisation Plug-ins: Imagine you have a friend who reads every book in the library and then tells you the gist of each one over coffee. Summarisation plug-ins can take long pieces of text and condense them into shorter versions, preserving the main points and themes.
  • Search Plug-ins: These are like digital bloodhounds that can sniff out the information you need from the internet. Instead of just chatting, these plug-ins allow OpenAI's models to fetch and incorporate real-time data from the web into their responses.
  • Image Processing Plug-ins: Think of these like a pair of high-tech glasses that can look at a photo and describe what's in it. They can help OpenAI's models understand and generate descriptions of images, or even create images based on descriptions.
  • Voice Recognition Plug-ins: These give OpenAI's models ears. With voice recognition, you can talk to the AI, and it can transcribe your speech into text, understand it, and respond accordingly.
  • Educational Plug-ins: These are like private tutors that can help with homework or learning new subjects. They can provide explanations, solve maths problems, or help you learn a new language.
  • Productivity Plug-ins: These are your digital personal assistants. They can help schedule your appointments, remind you of tasks, and even automate some of your routine jobs.

By using these plug-ins, OpenAI's models can become even more powerful and personalised, turning them into a sort of digital Swiss Army knife for information and tasks.
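For a flavour of how a plug-in gets wired up, here is a sketch using the API’s tool-calling interface. The translate_text function is hypothetical, something you would build yourself or borrow from a translation service; the model only sees its description and decides when to ask for it. As before, this assumes the openai Python package and the gpt-4 model.

    from openai import OpenAI

    client = OpenAI()

    # Describe a hypothetical 'translation plug-in' to the model as a callable tool.
    tools = [{
        "type": "function",
        "function": {
            "name": "translate_text",  # our own function; the model never runs it directly
            "description": "Translate a piece of text into a target language.",
            "parameters": {
                "type": "object",
                "properties": {
                    "text": {"type": "string"},
                    "target_language": {"type": "string"},
                },
                "required": ["text", "target_language"],
            },
        },
    }]

    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": "Say 'winter is coming' in Spanish."}],
        tools=tools,
    )

    # If the model decides the plug-in is needed, it returns a structured request
    # that our own code (the actual translator) would then carry out.
    print(response.choices[0].message.tool_calls)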

What do we make of criticism that ChatGPT hallucinates? And that it can make up answers that are totally off? Why does this happen and how do you deal with it?

The criticism that ChatGPT "hallucinates" refers to instances where the AI confidently generates responses that are either factually incorrect or nonsensical. This is a known limitation of language models like ChatGPT, and it happens for a few reasons:

  • Lack of Understanding: While ChatGPT can process and generate language, it doesn't "understand" content in the way humans do. It recognises patterns in data but doesn't have the ability to verify the truth or apply common sense in the same manner a person would.
  • Training Data: ChatGPT is trained on a diverse dataset from the internet, which includes both accurate and inaccurate information. If the model encounters enough incorrect data presented in a convincing way, it may reproduce those inaccuracies.
  • Interpolation and Extrapolation: The model tries to fill in gaps in its knowledge by interpolating from what it has learned. Sometimes, this leads to plausible-sounding but incorrect information. Other times, it may extrapolate beyond its training data in ways that don't make sense.
  • Overconfidence: AI doesn't have self-doubt or scepticism. It doesn't second-guess its responses, which can lead to presenting incorrect information with unwarranted confidence.

How to deal with it?

  • Critical Evaluation: Users should critically evaluate the information provided by ChatGPT, especially for important or sensitive topics. It's always a good idea to verify AI-generated information with reliable sources.
  • Feedback Loops: Providing feedback to the AI developers can help improve the model. When incorrect responses are identified and reported, they can be used to fine-tune the AI's algorithms.
  • Limitations Disclosure: Being transparent about the AI's limitations helps set realistic expectations for users. It's important to communicate that while ChatGPT can be a powerful tool, it's not infallible.
  • Supplementary Tools: Using additional tools like fact-checking plug-ins or integrating databases with verified information can help reduce the occurrence of "hallucinations."
  • User Education: Educating users on how AI works and its potential pitfalls can encourage more responsible use and a better understanding of when and how to trust AI-generated content.

In summary, while ChatGPT and similar models are impressive in their capabilities, they are not perfect. Recognising their limitations, using them as one of many tools, and maintaining a healthy scepticism can help mitigate the impact of their "hallucinations."
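One simple way to keep that healthy scepticism in practice is a second pass: ask the model to list the checkable claims in its own answer so you (or a fact-checking tool) can verify them against reliable sources. The sketch below assumes the openai Python package and the gpt-4 model; it doesn’t stop hallucinations, it just makes them easier to spot.

    from openai import OpenAI

    client = OpenAI()

    question = "When was the first printed cookbook published?"

    answer = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": question}],
    ).choices[0].message.content

    # Second pass: ask the model to surface its own factual claims for checking.
    audit = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "user", "content": question},
            {"role": "assistant", "content": answer},
            {
                "role": "user",
                "content": (
                    "List every factual claim in your answer as a bullet point, "
                    "suggest a source where possible, and flag anything you are unsure about."
                ),
            },
        ],
    )

    print(audit.choices[0].message.content)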

The competition: Google's Bard and more

Of course, every superhero has a friendly rival. For GPT-4 and ChatAI, it's Google's Bard. Bard is Google's attempt at creating a similar chatbot. While GPT-4 is like the wise old owl, Bard is the sprightly young falcon. Both have their strengths, and the tech world is eagerly watching this friendly duel.

Should AI be regulated?

The question of whether AI should be regulated is akin to asking if a bustling city should have traffic laws. Just as roads without rules could lead to chaos and accidents, unregulated AI has the potential for misuse, unintended consequences, and ethical quandaries.

Here are some reasons why regulation is considered important:

  • Privacy: AI systems often process vast amounts of personal data. Without regulation, there's a risk of this information being misused, leading to privacy violations.
  • Bias and Fairness: AI systems can perpetuate and amplify biases present in their training data. Regulations could help ensure that AI systems are fair and do not discriminate against any individual or group.
  • Safety and Security: As AI becomes more integrated into critical systems like healthcare, transportation, and finance, ensuring these systems are safe and secure is paramount. Regulations can set standards to prevent failures that could have catastrophic consequences.
  • Accountability: When AI systems make decisions, it's important to have clear lines of accountability, especially when those decisions have significant impacts on people's lives. Regulation can help clarify who is responsible when things go wrong.
  • Transparency: Understanding how AI systems make decisions can be crucial, especially for those affected by these decisions. Regulations may require AI systems to be transparent in their decision-making processes.
  • Ethical Use: AI has the potential to be used in ways that could harm society, such as deepfakes or autonomous weapons. Regulation can help prevent such harmful uses.

However, the challenge with AI regulation is to balance the need for safety, privacy, and accountability with the need for innovation. Overly stringent regulations could stifle the development of new AI technologies, while too little could lead to the issues mentioned above. It's like finding the right speed limit for our city roads; too slow and you hinder the flow of traffic, too fast and you increase the risk of accidents.

The current approach in many regions is to develop guidelines and frameworks that can evolve with the technology. This includes principles of ethical AI, impact assessments, and sector-specific regulations. The goal is to create a regulatory environment that is both flexible and robust, ensuring that AI benefits society while minimising potential harms.


About the author

Charles Assisi

Co-founder and Director

Founding Fuel

Charles Assisi is an award-winning journalist with two decades of experience to back him. He is co-founder and director at Founding Fuel, and co-author of the book The Aadhaar Effect. He is a columnist for Hindustan Times, one of India's most influential English newspapers. He is vocal in his views on journalism and the shape it ought to take in India. He speaks on the theme at various forums and is often invited by organizations to teach their teams how to write.

In his last assignment, he wore two hats: that of Managing Editor at Forbes India and Editor at ForbesLife India. As part of the leadership team, his mandate was to create a distinctive business title in a market many thought was saturated. When Forbes India was finally launched after much brainstorming and thinking through, it broke through the ranks and came to be recognized as the most influential business magazine in the country. He did much the same thing with ForbesLife India, where he broke from convention and launched the title to critical acclaim.

Before that, he was National Technology Editor and National Business Editor at the Times of India during the great newspaper wars of 2005. He was part of the team that ensured the Times of India maintained top-dog status in Mumbai in the face of assaults by DNA and Hindustan Times.

His first big gig came in his late twenties, when German media house Vogel Burda marked its India debut with CHIP, a wildly popular technology magazine. He was appointed Editor and given a free run to create what he wanted. During this stint, he worked and interacted with Vogel Burda's newsrooms across Europe and Asia.

Charles holds a Master's in Economics from Mumbai University and an MBA in Finance. Along the way, he earned the Madhu Valluri Award for Excellence in Journalism and the Polestar Award for Excellence in Business Journalism.

In his spare time, he reads voraciously across the board, but is biased towards psychology and the social sciences. He dabbles in various things that catch his fancy at various points. But as fancies go, many evaporate as often as they fall on him.
