Cakes' Lost Notes

Are we... using AI... Wrong?

Yes. Yes we are.

AI, or more specifically generative AI, has quickly been taking over the average household.

Most innovative tech tends to do this. Think of the first smartphone, wireless earbuds, or smart home assistants like Alexa. They all catch on quickly and integrate themselves into our lives so seamlessly that we forget what life was like without them.

Now this isn't always a bad thing (though it often does lead down that road). Smartphones really did improve our lives and helped the world become more connected. The smartphone is a marvel of human innovation, but it is also one that has now unfortunately been tainted by corporate greed.

At what point does innovation become something we should fear?

Generative AI is a little different from the smartphone. For one, the stench of corporate greed was prominent even during the first year of its mainstream launch.

In 2022 ChatGPT became a household name, and within weeks we started hearing news about stolen content being used to train these models. Thousands of creators from many different spaces were being exploited without their knowledge; and that's not even mentioning the environmental cost of this technology.

Whilst all of these negatives may leave a sour taste in your mouth, there is a way in which we can take control of this technology.

First and foremost, however, every company that develops LLMs (large language models) should credit the creators whose work they use to train them. This is the one point I want to emphasize, and it's less about ethics and more about a moral right.

With that out of the way, next I want to talk about how these tools are marketed. Generative AI is increasingly being marketed less as a tool and more as a solution to problems that don't exist.

Minor inconveniences and experiences are NOT problems to be solved.

What I mean by that is, you don't need an AI to tell you what to make for lunch today, or to text your friend, or to write that book report, or to summarize that article.

Engage with the tasks you do. Be mindful in day-to-day activities. Every shortcut you take in this regard takes away from the value you're getting out of life. This is not what you should use AI for.

So what SHOULD I use AI for then?

Honestly? I can't give you a comprehensive answer to that. I can give you an example, but in general you should use it in a way that does not take away from the experiences of everyday life.

I've talked previously about my AI personal assistant. What it really is, is just a language model that detects what I want, turns it into a command, and performs that task. How is this useful, you may ask?

Instead of remembering rigid commands for every task, I can just ask for it naturally in the form of a text. The "AI" part is only used to understand the string of words I say and turn it into a command my code can understand. From there, the rest is just plain old-fashioned programming. It also helps to have a centralized hub for most tasks instead of opening multiple different applications.
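For the curious, the "plain old-fashioned programming" half of that setup can be sketched in a few lines. This is a minimal illustration, not my actual code: the command names and the `dispatch` function are hypothetical, and the JSON string stands in for whatever structured reply the language model produces after reading a natural-language request.

```python
import json

# Command registry: ordinary programming. The language model's only job
# is to emit a small JSON payload naming one of these commands.
COMMANDS = {
    "set_timer": lambda minutes: f"Timer set for {minutes} minutes",
    "play_music": lambda playlist: f"Playing {playlist}",
}

def dispatch(model_output: str) -> str:
    """Turn the model's structured reply into a real command call.

    `model_output` is assumed to look like:
    '{"command": "set_timer", "args": {"minutes": 10}}'
    Everything past this point is deterministic code, no AI involved.
    """
    payload = json.loads(model_output)
    handler = COMMANDS.get(payload["command"])
    if handler is None:
        return "Sorry, I don't know that command."
    return handler(**payload.get("args", {}))
```

The nice part of this split is that the model never executes anything itself; it only picks a command from a fixed menu, and the boring, reliable code does the rest.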

The above example, while a shortcut, does not cut down on a valuable experience. Remembering rigid commands word for word is neither a useful skill nor an important experience. Clicking through computer applications doesn't add value to my life either. These are the things you need shortcuts for, and it is perfectly fine to use tools like AI for these tasks.

It's also important to stay educated. Know the impact of AI, and know what you can do to reduce it. Do as much good as you can for the world and ask if it was worth it later.


I'll end this off by saying that as much as I've thought about the uses of AI, I haven't come up with a satisfying enough use case for generative AI (which is the most popular kind, and the one I've talked about most in this article). For now I will just continue to be wary of it.

If you learned something, do leave a toast -Cakes <3

#Advice? #Rant #Tech