Four Factors of AI That Will Shape the Next Decade
The age of AI is in full swing, but to what extent? And what exactly dictates the impact of AI and its reach into our society?
Today, I wanted to dig into four main factors I feel will determine the scope of AI and its lasting effect on our lives.
Now obviously, a myriad of factors will determine how this technology shapes the next decade, but I’ll talk through just four topics at a high level to give a general overview.
1. Governmental AI policies
There is a clear fear across many societies about the power of AI and its ability to replace basic human tasks. LLMs can create text-based sales pitches, essays, summaries, and stories. They can also perform classification tasks, debug code, generate images (soon probably videos too), and synthesize data.
These concerns have reached most governments, which have responded with various “AI Acts” and executive orders meant to establish working policies on how much of our society AI should be allowed to control.
One famous example thus far is the AI Act passed by the EU. These acts aim to set regulations that draw boundaries around what AI should and should not do. Major top-of-mind concerns include AI bias, privacy, data transparency, over-reliance, national security, and of course, job security.
Biden’s AI Executive Order is another prime example of what we can expect the federal government to support regarding AI, and what to watch out for. It’s clear that innovation and competition are welcomed, but with a need to follow standards on AI safety, security, equity, privacy, and civil rights. I recommend staying up to date with these government acts, as policy wherever you live can dictate the boundaries of AI.
Because of this potential impact, I believe we’ll all observe a battle between venture capitalists and investors who think like Marc Andreessen (that “AI will save the world”) and staunch defenders of heavy regulation. Regulators could push for transparency requirements, protections for people’s safety and fundamental rights, and the prevention of extensive job loss caused by AI. Even radical regulators or protesters may become popular thanks to the expansive growth of AI; these groups would look at AI and its applications with immense disapproval, believing they will lead to the end of humanity.
2. AI content detection
An entire market designed to detect AI-generated content will emerge, and to be honest, it already has. Text-based content detectors such as writer.com and GPTZero are already out there, selling API and web-based services to teachers, students, and writers.
Despite the availability of these tools, it’s still not easy to detect AI-generated content. Generative AI has grown sophisticated; both zero-shot and multi-shot prompting (zero-shot means the LLM answers from the task description alone, with no examples, while multi-shot means the prompt includes a handful of worked examples to steer the response) have been growing stronger in precision, with more use cases for each. Because LLM-based products these days cater to both, it’s becoming harder and harder to detect AI-generated text.
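To make that distinction concrete, here’s a minimal sketch of both prompting styles using the OpenAI Python SDK; the model name and the sentiment task are illustrative stand-ins I picked, not anything the detectors above depend on.

```python
# Zero-shot vs. multi-shot (few-shot) prompting with the OpenAI Python
# SDK. The model name and the sentiment task are illustrative stand-ins.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Zero-shot: the task description alone, no examples in the prompt.
zero_shot = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{
        "role": "user",
        "content": "Classify the sentiment of: 'The update broke my workflow.'",
    }],
)

# Multi-shot: the prompt itself carries worked examples that steer the
# model's output format and judgment, with no fine-tuning involved.
multi_shot = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{
        "role": "user",
        "content": (
            "Classify sentiment as positive or negative.\n"
            "Review: 'Love the new dashboard.' -> positive\n"
            "Review: 'Crashes every time I export.' -> negative\n"
            "Review: 'The update broke my workflow.' ->"
        ),
    }],
)

print(zero_shot.choices[0].message.content)
print(multi_shot.choices[0].message.content)
```

The multi-shot version usually produces steadier, more on-format output, which is exactly what makes machine text harder for detectors to flag.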
Of course, text is the least of our problems. Text-to-image models (once dominated by Generative Adversarial Networks, or GANs, and now mostly diffusion models like the one behind DALL·E) can generate crazy images that look real or digitally drawn. The number of generated images, videos, and other multi-dimensional works that have skyrocketed to popularity across the internet is alarming. Just go check out the Microsoft Bing Image Creator, insert a prompt, and watch as the model does its work. Most results can pass quick eye tests! There are still problems with perfecting image generation, but you see where I’m going with this: how can we differentiate generated images from ones produced by hard-working artists?
A market is already brewing: a whole industry dedicated to detecting even the smallest bits of generated content. We have to take responsibility for AI generation and protect our artists and producers by calling out what’s AI and what’s not.
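If you’re curious what these detectors actually measure, many lean on statistical signals such as perplexity: text a language model finds “too predictable” is more likely machine-written. Here’s a toy version of that idea using Hugging Face’s transformers library and GPT-2; the threshold is a made-up illustration, not how GPTZero or writer.com actually score text.

```python
# Toy perplexity heuristic in the spirit of AI-text detectors: score how
# predictable a passage is to a small language model. The threshold is
# purely illustrative and would be far too crude for real use.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        # Passing the input ids as labels makes the model report its
        # average next-token loss over the passage.
        loss = model(**enc, labels=enc["input_ids"]).loss
    return torch.exp(loss).item()

ppl = perplexity("The quick brown fox jumps over the lazy dog.")
# Lower perplexity = more predictable = (very roughly) more machine-like.
print(f"perplexity={ppl:.1f}", "suspicious" if ppl < 20 else "likely human")
```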
3. GPU pricing war
The amount of processing/computing power it has taken the AI players in today’s market to train their LLMs is staggering.
Just look at Meta’s LLaMA model: based on this CNBC article by Jonathan Vanian, it took 2,048 Nvidia A100 GPUs to train on 1.4 trillion tokens. The entire process took 21 days, or about 1 million GPU hours, at a cost of over 2.4 million dollars based on dedicated prices from AWS!
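Those figures are easy to sanity-check with a few lines of arithmetic; the dollars-per-GPU-hour rate below is my own assumed effective dedicated-instance price, picked only to make the math land near the reported total.

```python
# Back-of-the-envelope check on the LLaMA training figures. The rate is
# an assumed effective price per A100 GPU-hour, for illustration only.
gpus = 2048   # Nvidia A100s reported for training
days = 21     # reported wall-clock training time
rate = 2.35   # assumed $ per A100 GPU-hour on a dedicated instance

gpu_hours = gpus * days * 24
print(f"{gpu_hours:,} GPU-hours")          # 1,032,192, roughly 1 million
print(f"${gpu_hours * rate:,.0f} total")   # about $2.4 million
```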
My prediction here is that once GPUs dip below the price point at which organizations can scale with sustainable ROI, the composition of jobs we see today will most definitely shift.
What’s always hindered companies from replacing chat specialists, technical writers, and other content- or text-producing workers is the cost of GPUs and processing power. As scary as it sounds, there’s a real possibility some people, especially in companies trying to cut costs, could find themselves in this job composition shift: a change in the type of content-producing jobs in the industry. Notice I’m not saying people will lose their jobs and the end of the world happens thereafter; I’m saying the industry will witness a conspicuous pivot.
For example, imagine a copywriter creating a new ad campaign for a digital consumer product. They could potentially delegate most of the actual “content creation” to an AI assistant, but of course, the assistant will have missed or misrepresented various topics or concepts. LLMs hallucinate, after all. The copywriter’s job would shift toward “quality control,” though they would still contribute creatively. Based on this article by Cami Rosso, AI could potentially match human-level creativity, but consistency at a high level of quality is what matters most, and humans can still top that.
4. Agents and Assistants everywhere
Last but most definitely not least, data agents and assistants have exploded across most LLM platforms and will continue growing in popularity. To boil it down, these agents and assistants act as “actors” within the LLM process (from prompt to response). A data agent can help break down the context needed for a specific prompt and route the LLM to a specific data assistant, which is connected to specific resources; AWS Bedrock Agents are one example. The assistant can then extract more content from those resources to provide a more accurate answer to the user, or it can perform a variety of other tasks a general LLM otherwise couldn’t. Check out OpenAI’s Assistants API for another example.
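To see what this looks like in practice, here’s a minimal sketch against OpenAI’s Assistants API (still in beta as I write this); the assistant’s name, instructions, and question are all illustrative stand-ins.

```python
# Minimal sketch of OpenAI's Assistants API (beta). An assistant owns
# tools and instructions, a thread holds the conversation, and a run is
# where the "agent" behavior happens: the assistant decides whether to
# invoke its tools before answering. Names and prompts are illustrative.
import time
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

assistant = client.beta.assistants.create(
    name="Docs Helper",
    instructions="Answer the user's question, using your tools when helpful.",
    model="gpt-4o-mini",
    tools=[{"type": "code_interpreter"}],  # one of the built-in tools
)

thread = client.beta.threads.create()
client.beta.threads.messages.create(
    thread_id=thread.id,
    role="user",
    content="What is the compound interest on $1,000 at 5% over 10 years?",
)

run = client.beta.threads.runs.create(thread_id=thread.id, assistant_id=assistant.id)
while run.status in ("queued", "in_progress"):
    time.sleep(1)
    run = client.beta.threads.runs.retrieve(thread_id=thread.id, run_id=run.id)

if run.status == "completed":
    # Messages come back newest-first; the assistant's reply is at the top.
    messages = client.beta.threads.messages.list(thread_id=thread.id)
    print(messages.data[0].content[0].text.value)
```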
Generative AI models don’t need much context to produce creative answers, and agents and assistants will help ground them, tailoring responses to the situation at hand. The more of these agents and assistants an LLM platform has, the better it can avoid the over-generalized answers that are a common problem with current LLMs.
Conclusion
As we see more tech and non-tech companies adopting AI, more excitement and fear will continue to brew. Every product in almost every market will eventually have some form of “AI enhancement” — blurring the boundary between AI and humans even further.
Of course, these are all my subjective predictions on what factors will matter the most for the future of AI and technology. While we all know many products we use daily will continue to incorporate some AI or robotic intelligence, only the future can tell us what will truly happen. What I do recommend is to pay close attention; we’re seeing a technological revolution before our very eyes.
About Me
My name is Kasey, AKA J.X. Fu (pen name). I’m passionate about writing, and thus I’ve found myself deep in the abyss on weeknights creating novels. I do this while working a full-time tech PM job during the day.
Follow me on Medium for more writing, product, gaming, productivity, and job-hunting tips! Check out my website and my Linktree, and add me on LinkedIn or Twitter and tell me you saw my articles!