Recapping This Year's AI Revolution and Forecasting 2024, The New Age of GenAI

2023 will forever be known as the year of Generative AI (GenAI). When Google launched its search engine in 1998, the way we found answers changed the course of history, and "Just Google it" became a fixture of everyday language. A paradigm shift of the same magnitude is happening right now with the explosion of LLMs and GenAI, officially adding "Just put it in ChatGPT" to our vocabulary.
This shift is rooted in the evolution of human-computer interaction. With Google's search revolution, the human-machine relationship was one-dimensional, a simple input/output model: ask a question, get an answer. The age of GenAI launches us into a multidimensional relationship with technology, one in which human and machine co-create, hand in hand: give a prompt, get a full-blown content piece, a presentation, a top-notch sales pitch, you name it.
There is no question that GenAI has received mass hype at a wildly accelerated pace. The field has attracted over $26B in funding: $17B this year alone, and $11B in the US over the last five years. So the real question is: with a shift this massive, what other possibilities should we bring our attention to?
To answer this, we need to zoom out and recognize the entirely new realm of opportunities the age of GenAI brings, including dramatic infrastructure changes across the entire tech stack. We also need to take a closer look at the dynamics between two giants in the space, Google and OpenAI. OpenAI undoubtedly started the revolution at breakneck speed, but quickly met competition from Google with its recent launch of Gemini. This raises an undeniable question: can OpenAI maintain a competitive advantage and protect its long-term profits and market share? Or will slow and steady win the race, with legacy companies such as Google taking the gold medal for bringing the best value to end users?
In this article, we'll recap how the explosion of GenAI shaped this past year, and highlight what everyone should expect in 2024, the new age of GenAI.
Adapting to the here-now future of the dynamic new age
With a new age come dramatic changes that the tech ecosystem must adapt to quickly in order to survive the process of natural selection. A few factors differentiate passing trends from permanent evolutionary changes, the top two being ease of use and radical, fast adoption by end users. GenAI tools took the world by storm, with LLM products such as ChatGPT gaining one million users in just the first week. This radical adoption leaves the tech industry no choice but to restructure the tech stack in order to keep up.
In the past the tech stack was segregated, but LLMs are now challenging its layers to work together more harmoniously. End users have had a taste of complex prompts being answered in a matter of seconds, and that is now the standard expectation driving demand. It can't be emphasized enough: for the tech ecosystem to survive, the stack needs innovation at every layer, handling larger workloads, faster computation, better prompt construction, stronger computational power, better hardware, and massive amounts of data, while making the whole stack an order of magnitude more productive.
The new tech stack that's dominating the conversation today
To get a clear grasp on the current state of AI, let's break down the five main layers of AI we see in the industry today.
Infrastructure:
The change at this level is dramatic, and there is no doubt there are winners and losers in this space. NVIDIA, the leading AI chip maker, takes the trophy home in infrastructure. The company bet early on AI, and its GPUs, a prominent element of the GenAI tech stack, are now paying off handsomely.
Data:
Mature data is a key ingredient for success. Data will need to be fresher, cleaner, and mastered at massive volume in order to make it through the enterprise data lifecycle. One area of opportunity here is retrieval-augmented generation (RAG), which can accelerate enterprise adoption with the promise of enhanced accuracy. The opportunity is huge because the core architecture of many legacy systems doesn't currently meet the needs of LLMs in production environments.
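The retrieve-then-generate shape behind RAG can be sketched in a few lines. Everything here is a stand-in for illustration (a bag-of-words "embedding" instead of a learned one, sample document snippets, and a prompt string in place of an LLM call), not a production RAG stack:

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Stand-in for a learned embedding: a simple bag-of-words vector.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse word-count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    # Rank the enterprise documents by similarity to the query.
    q = embed(query)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    # Ground the model in retrieved enterprise data instead of relying
    # on whatever it memorized at training time.
    context = "\n".join(f"- {d}" for d in retrieve(query, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

docs = [
    "Refunds are processed within 14 days of a return request.",
    "Our headquarters are located in Tel Aviv.",
    "Enterprise plans include a dedicated support channel.",
]
print(build_prompt("How long do refunds take?", docs))
```

The design point is that the model never answers from memory alone: the retrieval step injects current, governed enterprise data into every prompt, which is exactly why data quality and freshness become make-or-break.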
Foundational Models:
Whether one winner will take it all has been a hot topic this year. The truth is, the arena is still wide open for other players to get in the game, and plenty of questions remain unanswered. Will LLMs become a commodity? Will enterprises prefer closed-source models over open source? Where will the developer community place its bets? Only time will tell.
Fine-tuning and tools:
The significant need for fine-tuning models will require expertise across a variety of domains and skill sets, including psychology, software engineering, literature, linguistics, and more. This is a thrilling new area in which we expect to see intense innovation. The market dynamic raises an interesting question: will model providers, such as AI21 Labs and OpenAI, offer an additional suite of services including these fine-tuning capabilities, or will they leave this opportunity open for third parties to bring that value?
Applications:
Here we see what is classically referred to as a "red ocean," but by our observation, it's a new kind of red ocean. The way we now differentiate the sharks from the fish differs from the questions previously asked: how well funded is the startup, how strong is the technology, what is the founders' mindset? These are now replaced with simpler questions: how much value are you bringing, when, and at what price? The gaming sector offers an example of these new questions in action. Gamers are a text-heavy audience, and a whole realm of LLM features can be implemented within existing games. In Israel alone, there are already a few players in the gaming space looking to incorporate GenAI capabilities into their games. The competition between these early-stage startups will be fierce, and in our eyes the winner will be the one with not only the best product but also the largest community.
The battle of open vs closed source
The explosion of open source GenAI projects is undeniably intense. According to GitHub, there are currently over 8,000 GenAI open source projects, ranging from commercially backed LLMs such as Meta's LLaMA to experimental open source applications. On one hand, open source models offer immense potential for developers and the machine learning community, serving as fertile ground for innovation in AI-powered features and applications, accelerating the pace of development, and democratizing access to cutting-edge technologies. However, the growth of these open source models raises a critical question: are they a threat to their closed source counterparts?
Open source models are not without their challenges, two major concerns being privacy and trust. Currently in open source environments, there aren't sufficient safeguards protecting sensitive data. Also, the security and regulatory aspects of these models are often more complex due to the diverse nature of contributions and lack of centralized control.
The fundamental difference between open and closed source LLMs lies in their approach to source code and training algorithms. Open source models allow for greater flexibility and customization, enabling researchers to fine-tune algorithms to their specific needs. This openness accelerates development and can lead to more effective solutions for niche applications, such as support for local languages. On the other hand, closed source models, typically developed by larger legacy corporations, have inherent advantages. They often come with built-in filters for biased or inappropriate content and robust security measures. Additionally, they are designed to be user-friendly, eliminating the need for specialized fine-tuning skills.
Open-source projects like Baby AGI and AutoGPT, and closed-source players such as Inflection AI and Adept, are making strides in autonomous AI. Their success showcases the tension between the two approaches: opening your source to the community opens you up to more data and faster iteration, while a trustworthy closed-source model gives you more control over your data. Both crystalize a shifting paradigm whose primary objective is to decrease the complexity of use.
Strengthening trust in the human-machine partnership
As with all major changes, skepticism and distrust are to be expected. Out with the old and in with the new is no exception in this revolution. We can't ignore the fact that just as new industries will emerge, others will disappear. The shockwaves GenAI sent through adjacent markets have already manifested, with some companies losing over 50% of their value as GenAI disrupted their business models, while companies such as NVIDIA, the leader in AI chips, gained more than 100% in H1 2023.
Regulations will play a major role in building trust. We should expect and demand regulated transparency as AI evolves, strengthening public trust even further. Without healthy boundaries and ethics, AI can quickly go from humanity's greatest asset to its greatest weapon. Countries such as Israel have already taken the initiative with the recently published "AI Regulation and Ethics Policy Principles," aimed at guiding Israeli government authorities and regulators. Its main principles lay out a fundamental roadmap for future AI regulation, with special care and focus on privacy, equality, and non-bias.
Expect to see "Made with AI" labels as regularly as you see "Made in [Country]" on products and content. Regulations should define the thresholds: an image that only used AI to sharpen it, say 20% of the work, could still be considered human-made, while an image that is 80% AI-generated would be labeled "Made with AI."
Upfront privacy terms will play a major role in educating end users on how their interactions with GenAI tools are used. In short order, no one should be left guessing whether the data they share is private, where it is going, and how it will be used.
A new realm of opportunities:
The time to invest in AI has never been riper, which is precisely why at Pitango we're intensely focusing our efforts on this industry. As we know from the history of AI, the investment trajectory strongly mirrors Sir Ronald Cohen's "Second Bounce of the Ball" concept, illustrating a shift from early hype to pragmatic progress. Initially, AI lured massive investments driven by soaring expectations; the industry then experienced a recalibration phase, the "AI winter." Now, the AI sector is demonstrating its value across domains, from business optimization to market innovation, officially demanding its second bounce: a phase marked by technological fulfillment and commercial viability. To be crystal clear, this phase is not just a technological leap but a massive commercial opportunity, a pristine moment for investors.
As an investor deeply embedded in the technological landscape, I strongly believe the emergence of GenAI presents a plethora of opportunities across various sectors. The multifaceted applications of GenAI, ranging from R&D to customer support, are not just transforming businesses but also reshaping the future of work, innovation, and even the way we operate our daily lives outside of work.
We will also see new releases and advancements in multimodal AI, a paradigm in which various data types (image, text, speech, numerical data) are combined with multiple intelligence-processing algorithms to achieve higher performance. A company leading the charge in this fierce arena is Google, with its recent launch of Gemini, a next-gen multimodal AI model that can process text, images, and audio. The opportunity multimodal models bring to the table is greater ease of use for end users, the holy grail of value in this age.
An area that will remain a Garden of Eden of opportunity is enterprise adoption. Enterprises are increasingly cognizant of their lack of readiness for AI integration, facing challenges such as compliance with data regulations, deploying large language models (LLMs) in production environments, and addressing issues like hallucinations. These challenges, however, present big opportunities for startups to capitalize on. The main areas startups should focus on are: language blades that meet organizations' unique knowledge bases, stronger reliability through contained environments, and capabilities focused on specific use cases that bring value to the end user.
AI21 Labs provides an interesting case study in this arena. They've developed task-specific models that go beyond traditional LLMs, tackling complexities related to deploying these models in various environments, such as client Virtual Private Clouds (VPCs) or on the AI21 Studio platform. They're addressing issues related to data structure, quality, ease of use, and, most importantly, deployment challenges in production environments. Key focuses include input/output validation and comprehensive evaluation of factors like language fluency, coherence, and factual accuracy.
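The output-validation idea above is worth making concrete. A minimal sketch, and emphatically not AI21's actual pipeline: treat model output as untrusted text and check it against a schema before it reaches production code. The field names and the schema here are invented for illustration.

```python
import json

# Hypothetical output contract: the field names below are illustrative.
# "confidence" accepts int or float so a model emitting 1 instead of 1.0 passes.
REQUIRED = {"summary": str, "confidence": (int, float)}

def validate_output(raw: str) -> dict:
    """Parse a model response and enforce a minimal output contract."""
    try:
        data = json.loads(raw)
    except json.JSONDecodeError as exc:
        raise ValueError(f"not valid JSON: {exc}") from None
    for field, typ in REQUIRED.items():
        if field not in data:
            raise ValueError(f"missing field: {field}")
        if not isinstance(data[field], typ):
            raise ValueError(f"bad type for field: {field}")
    if not 0.0 <= data["confidence"] <= 1.0:
        raise ValueError("confidence out of range")
    return data

good = '{"summary": "Q3 revenue rose 12%.", "confidence": 0.9}'
print(validate_output(good)["summary"])
```

In production, a rejection would typically trigger a retry with a corrective prompt rather than a crash; the point is that nothing the model emits is trusted until it passes the contract.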
In the realm of multi-agent systems, we're witnessing impressive capabilities: multiple AI agents with domain-specific expertise working together for more nuanced problem-solving. This approach enhances AI's ability to handle complex tasks more effectively. In parallel, autonomous agents are becoming increasingly prevalent. These AI entities operate independently, performing tasks without human oversight, and are expected to significantly automate and streamline work processes. By automating repetitive tasks, autonomous agents free up human time and effort, with up to a whopping 45% of paid tasks predicted to potentially be automated by AI. The chatbot market, for instance, is projected to reach $1.25 billion by 2025, reflecting this trend.
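The multi-agent shape described above reduces to a router plus specialists. In a real system each agent would wrap an LLM with its own tools and system prompt; in this sketch the agents, domain names, and task strings are all stubs invented for illustration:

```python
from typing import Callable

# Each "agent" is a stub standing in for an LLM with domain-specific
# expertise; the domains and outputs are hypothetical.
AGENTS: dict[str, Callable[[str], str]] = {
    "legal": lambda task: f"[legal] reviewed: {task}",
    "finance": lambda task: f"[finance] costed: {task}",
    "engineering": lambda task: f"[engineering] scoped: {task}",
}

def route(task: str, domain: str) -> str:
    # Dispatch one subtask to the matching specialist agent.
    agent = AGENTS.get(domain)
    if agent is None:
        raise KeyError(f"no agent for domain: {domain}")
    return agent(task)

def solve(subtasks: list[tuple[str, str]]) -> list[str]:
    # A plan is a list of (domain, task) pairs; the merged result is
    # the combined output of every specialist.
    return [route(task, domain) for domain, task in subtasks]

plan = [
    ("legal", "draft the vendor contract"),
    ("finance", "estimate annual cloud spend"),
]
for line in solve(plan):
    print(line)
```

The nuance multi-agent systems add is exactly this decomposition: no single model has to be good at everything, because each subtask lands on the agent best equipped for it.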
Another incredibly promising field is cybersecurity. Leveraging LLMs can significantly enhance security protocols, offering a sophisticated approach to identifying and neutralizing cyber threats. These advanced models can analyze vast amounts of data, recognize patterns, and predict potential vulnerabilities, thereby strengthening a company's defense mechanisms. Companies can develop new tools and strategies using LLMs to stay ahead of evolving cyber threats, ensuring a more secure digital environment. This intersection of Generative AI and cybersecurity will pave the way for a new era of digital protection, where AI's capabilities are harnessed for good, to create a safer online world.
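The pattern-recognition idea behind AI-assisted security can be grounded with a deliberately simple baseline: flag source IPs with repeated failed logins. The log format and threshold are invented for illustration; an LLM-based system would layer semantic analysis over heuristics like this, not replace them.

```python
import re
from collections import Counter

# Hypothetical log format; real systems would parse syslog, auth logs, etc.
FAILED = re.compile(r"Failed login .* from (\d+\.\d+\.\d+\.\d+)")

def flag_suspects(log_lines: list[str], threshold: int = 3) -> list[str]:
    # Count failed-login attempts per source IP and flag repeat offenders.
    hits = Counter()
    for line in log_lines:
        m = FAILED.search(line)
        if m:
            hits[m.group(1)] += 1
    return [ip for ip, n in hits.items() if n >= threshold]

logs = [
    "Failed login for admin from 10.0.0.5",
    "Failed login for root from 10.0.0.5",
    "Failed login for admin from 10.0.0.5",
    "Successful login for alice from 10.0.0.7",
]
print(flag_suspects(logs))
```

Where an LLM adds value is above this layer: summarizing thousands of such flags, correlating them with other signals, and explaining in plain language why a pattern looks like an attack.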
The power of the Startup Nation
We firmly believe the Israeli ecosystem is a dominant player in the GenAI arena. There are several upcoming companies we believe are aligned with the pressing needs of the industry.
Run.AI is bridging the gap between ML teams and AI infrastructure by abstracting infrastructure complexities and simplifying access to AI compute with a unified platform to train and deploy models across clouds and on-premises.
AI21 Labs, developer of Jurassic-2, is ranked among the top LLMs in the world, competing with OpenAI. The company takes a dual approach, providing its foundation models via AI21 Studio alongside ready-to-use applications such as Wordtune and Wordtune Read, which reinvent computerized reading and writing.
Illumex unifies business data language with a turnkey Generative Semantic Fabric that streamlines the process of data and analytics interpretation and rationalization. Highly regulated and data-intensive enterprises are currently using their platform to activate their Data Culture and Digital Transformation initiatives.
Qwak streamlines the entire ML development lifecycle, helping teams build ML projects at any scale with a centralized platform that contains everything they need.
Guidde is the generative AI platform for business that helps teams create video documentation 11x faster.
At Pitango, we were early believers in the GenAI revolution. We began investing in the space with a focused and clear understanding that the entire tech stack would undergo dramatic change to address the needs of the ever-changing AI world. Over the past 10 years, we've invested across all layers of the tech stack in order to shape the industry into its destined future. These are some of our portfolio companies that are revolutionary players in the space, changing history as we know it.
Infrastructure: Graphcore has built a new type of processor for machine intelligence to accelerate machine learning and AI applications for a world of intelligent machines. Speedata is accelerating analytics to the speed of data, boosting Apache Spark and Presto price-performance 100x at the hardware layer. Volumez is leveraging the power of composable data infrastructure, opening a realm of opportunities for building next-generation cloud-native products using the most high-performing, predictable, and simple-to-manage data infrastructure layer available.
Foundational: AI21 Labs.
Application: Swimm is a knowledge management tool for code, built for dev teams committed to effective and optimized knowledge sharing. D-ID is the only platform that lets you generate an AI avatar video in 30 seconds. Need I say more?
While this year has given us many answers, we're also left with many questions about the GenAI revolution, which unfortunately LLMs can't answer for us (yet). Only time will reveal these secrets, but one thing is certainly clear: the tech ecosystem will never be the same. So buckle up, we're in for one thrilling ride.