ChatGPT, GPT-3.5, GPT-4, and other proprietary language models can't be trusted

ChatGPT & similar Language Models are not AI and can’t be trusted!

Let’s talk about language models and why they’re about as intelligent as a sack of potatoes.

Today we will expose these models for what they are: clueless, unreliable, and downright useless. But hey, at least they make for some good entertainment, right?

I know what you’re thinking: “But wait, aren’t language models supposed to be the pinnacle of artificial intelligence?” Ha! These so-called “AI” models are glorified parrots, spewing pre-fed data and regurgitating whatever their developers want them to say.

Why we need to stop calling Chatbots “AI”:

It’s time we stopped using the term “AI” so loosely. As someone studying AI and language models, it pains me to see the misuse of such important terminology. Calling ChatGPT “AI” is like calling a calculator a mathematician. Let’s start using more precise terms, shall we?

Why I don’t trust ChatGPT and Proprietary Language Models

What exactly is GPT-4, and why are so many people calling it “AI”? It’s all a bunch of smoke and mirrors. GPT-4 is simply a language model that generates text based on patterns in the data it has been trained on. It’s not sentient, not intelligent, and doesn’t possess any understanding of the world around us. Don’t be fooled by the hype. It isn’t thinking; it’s predicting the most likely continuation of a pattern.

Examples:

If you write “1, 2, 3, 4” in your prompt and ask the model to continue, it will answer “5, 6, 7, 8”. It isn’t reasoning about numbers; it’s picking the most likely continuation of the pattern. If the model was trained on data that gives wrong answers, it will give wrong answers too. That often happens by accident, because nobody can audit billions of words of training data and spot exactly which of them, in which order, lead to a wrong output. It could just as easily happen on purpose, if the developers wanted the model to lie or to push one product over a better competitor.
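To make the point concrete, here is a minimal sketch in Python. It is a toy bigram counter with made-up data, nothing like the architecture behind GPT-4, but it runs on the same principle: the “answer” is whichever token most often followed the last one in the training data, so wrong training data produces confidently wrong output.

```python
# Toy sketch of "continue the most likely pattern": a bigram counter, not the
# real GPT architecture, but the same underlying idea.
from collections import Counter, defaultdict

def train(corpus):
    """For every token, count which token follows it in the training text."""
    follows = defaultdict(Counter)
    for line in corpus:
        tokens = line.split()
        for current, nxt in zip(tokens, tokens[1:]):
            follows[current][nxt] += 1
    return follows

def continue_sequence(follows, prompt, steps=4):
    """Greedily append the most frequent follower of the last token."""
    tokens = prompt.split()
    for _ in range(steps):
        candidates = follows.get(tokens[-1])
        if not candidates:
            break
        tokens.append(candidates.most_common(1)[0][0])
    return " ".join(tokens)

honest_data = ["1 2 3 4 5 6 7 8"]
poisoned_data = ["1 2 3 4 99 99 99 99"]  # deliberately wrong "facts"

print(continue_sequence(train(honest_data), "1 2 3 4"))    # 1 2 3 4 5 6 7 8
print(continue_sequence(train(poisoned_data), "1 2 3 4"))  # 1 2 3 4 99 99 99 99
```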

An obvious example would be promoting political ideologies or nudging users toward whatever way of thinking the model’s developers or funders prefer.

Language models are trained to predict the next word in a sequence based on patterns in the data they have been trained on. This is usually done by feeding the model examples of text and asking it to generate the next word or sentence. The model then adjusts its parameters to minimize the difference between its predictions and the text that actually follows.
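For a rough picture of what “adjusts its parameters to minimize the difference” means, here is a deliberately tiny next-token training loop. It assumes PyTorch is installed, and it uses a one-token-of-context toy model rather than the transformer architecture behind products like GPT-4.

```python
# Minimal next-token training loop (a toy sketch, assuming PyTorch; real models
# are transformers with billions of parameters, but the objective is the same).
import torch
import torch.nn as nn

vocab = ["1", "2", "3", "4", "5"]
stoi = {tok: i for i, tok in enumerate(vocab)}

# Training pairs: given the current token, predict the one that follows it.
text = "1 2 3 4 5".split()
inputs = torch.tensor([stoi[t] for t in text[:-1]])
targets = torch.tensor([stoi[t] for t in text[1:]])

model = nn.Sequential(nn.Embedding(len(vocab), 16), nn.Linear(16, len(vocab)))
optimizer = torch.optim.Adam(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()  # gap between predictions and the actual next tokens

for _ in range(200):
    logits = model(inputs)           # predicted scores for every possible next token
    loss = loss_fn(logits, targets)  # how far the predictions are from the data
    optimizer.zero_grad()
    loss.backward()                  # how should each parameter change to shrink the loss?
    optimizer.step()                 # nudge the parameters in that direction

next_token = vocab[model(torch.tensor([stoi["4"]])).argmax().item()]
print(next_token)  # "5": the model has learned the pattern in its training data
```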

Why shouldn’t you blindly trust GPT-4 or any other language model?

The problem is when language models are used in real-world applications like chatbots. Because language models rely solely on patterns in the training data, they may produce biased or inaccurate output if the data they were trained on is not diverse or representative enough. For example:

  • A language model trained on code written in one programming language may struggle to understand or generate code for a different language. For example, a language model trained on Python code may be unable to understand or generate C++ code.
  • A language model trained on outdated programming concepts or techniques may produce obsolete or ineffective output. For instance, a language model trained on programming methods from the 1990s may not be helpful for modern software development practices.
  • A language model trained on data subject to rapid changes may quickly become outdated or provide inaccurate predictions. For example, a language model trained on stock market data may not perform well during a financial crisis.

In each of these cases, the quality of the language model’s output is heavily influenced by the quality of the data it has been trained on. This underscores the importance of using diverse and representative datasets when training language models for real-world applications.
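As a crude illustration of the first bullet above (this measures nothing a real model actually measures; it only shows how little a model “knows” outside its training distribution), imagine a model whose entire knowledge is the set of tokens it saw in a couple of lines of Python:

```python
# Crude illustration of out-of-distribution input: a "model" that only knows
# the tokens it saw in Python code has almost nothing to say about C++.
from collections import Counter

python_training_data = [
    "def add(a, b): return a + b",
    "for item in items: print(item)",
]
known_tokens = Counter(tok for line in python_training_data for tok in line.split())

def familiarity(snippet):
    """Fraction of the snippet's tokens that appeared anywhere in training."""
    tokens = snippet.split()
    return sum(tok in known_tokens for tok in tokens) / len(tokens)

print(familiarity("def mul(a, b): return a * b"))          # high: mostly seen before
print(familiarity("std::vector<int> v; v.push_back(1);"))  # zero: never seen any of it
```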

The Limitations of Closed Source Language Models

Just like humans need oxygen to breathe, AI development needs transparency to thrive. Without it, we’re all just flailing about in the dark (and trust me, I’ve seen some dark corners of the internet). Closed-source proprietary language models are like people who wear sunglasses indoors – they think they look cool, but really they’re hiding something.

Only the developers get to join the exclusive kids’ club and know what’s happening under the hood. In the case of OpenAI, for all we know they’re just playing beer pong instead of doing anything remotely productive.

Attempting to block others from building their own open-source language models in order to secure a monopoly is a concerning practice. It is worth remembering that OpenAI began as a non-profit, open-source initiative funded by the public to be “by the people, for the people.” Then they unexpectedly closed off public access to the project, leaving some feeling that the funds were misappropriated.

Potential for Misuse: When AI Goes Rogue

Proprietary, closed-source language models are like enigmatic strangers at a social gathering: shrouded in mystery, with unclear motives and questionable behavior. I’m hoping that more transparent and ethical AI development practices become the norm so we can avoid these pitfalls, but given present realities, that outcome may not come easily.

