
Quantum Leaps Forward: How Can Companies Adapt to Generative AI Now?

Interview with expert Martin Weitzel

Generative AI - How Do Companies Have to Position Themselves Now?
05.09.2023
Digital Transformation
Artificial Intelligence
Technical

What does generative AI mean for knowledge work, creative work, and software development, and how should companies adapt to these developments?

 

Martin Weitzel, subject matter expert, author of Deliberate Diligence, and Head of Group Innovation Management at Arvato Systems, answers these questions and more.

Generative AI

Generative AI is an umbrella term for AI that generates new, previously unseen content based on historical data. This distinguishes it significantly from discriminative AI models, which merely differentiate between different types of input.

Martin, artificial intelligence has been a topic for a long time. In the past, progress was relatively slow. Recently, however, generative AI seems to be making huge quantum leaps. Is that a subjective impression or a fact?

 

Emotionally, I feel the same way. I have been dealing with artificial intelligence for a long time in my role as Innovation Manager. My earliest memories of the first AI projects go back to around 2016 or 2017, when the first chatbot wave washed over us. At the time, I have to be honest, I was highly frustrated by these technologies. Artificial intelligence was promised, but it turned out to be more like "If, Then, Else" conditions, i.e., classic software development. Every dialogue was somehow crafted manually, and a data scientist had to take over whenever you wanted to build a new use case with the chatbot. If you deviated even a bit from the dialogue path on the user side, the bot stopped working. That was super frustrating and also damaged the field's reputation a bit.

 

At the same time, however, research continued, of course. Among other things, entirely new approaches were developed. This led to the development of large language models.

Large Language Model

A Large Language Model is a large generative AI language model. It is based on neural networks with a transformer architecture and is able to understand, process, and generate natural language. The models are trained on huge amounts of text and sometimes have several hundred billion parameters.

I first came across this approach with GPT-2, which was trained on vast amounts of data in the background. With GPT-3, it became pretty clear where the journey would lead. Today, at the latest, it is also clear to the broader public what is possible with generative AI.

GPT

GPT stands for Generative Pre-Trained Transformer and is an artificial-intelligence-based text generator. GPT was first introduced by OpenAI in 2018. It was subsequently followed by more powerful versions, including GPT-2 in 2019, GPT-3 in 2020, and GPT-4 in 2023. The newer versions are only accessible via an API, while GPT-2 is available as open-source software.

Returning to the initial question and the subjective feeling: there is an interesting phenomenon, namely that we humans look at the development of such technologies only linearly. That is, we see the current state and think it will perhaps be 10 or 20 percent better the following year. But if we compare this with other technologies of the recent past, such as cloud storage, CPUs, and so on, the reality is that these technologies improve exponentially.

 

Therefore, we should assume that today's AI will not merely be 10 to 20 percent better within a relatively short time, but four times, ten times, or twenty times as good. That produces an exponential curve, which of course becomes steep at some point. Exactly when that happens, the surprise effects arise: you go from one quantum leap to the next and are astonished by the enormous progress.
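
To make that difference concrete, here is a minimal sketch in Python that contrasts the linear intuition ("10 to 20 percent better next year") with an assumed yearly doubling. The growth figures are purely illustrative assumptions, not measured values.

```python
# Minimal sketch: linear vs. exponential improvement.
# Growth figures are illustrative assumptions, not measurements.

def linear_projection(start: float, yearly_gain: float, years: int) -> float:
    """Capability if a fixed absolute amount is added every year."""
    return start + yearly_gain * years

def exponential_projection(start: float, factor: float, years: int) -> float:
    """Capability if it is multiplied by a fixed factor every year."""
    return start * factor ** years

start = 1.0  # today's capability, normalized to 1
for years in (1, 3, 5):
    linear = linear_projection(start, yearly_gain=0.15, years=years)      # assumed ~15 % per year
    exponential = exponential_projection(start, factor=2.0, years=years)  # assumed yearly doubling
    print(f"after {years} year(s): linear ~{linear:.2f}x, exponential ~{exponential:.0f}x")
```

After five years the linear view expects roughly a 1.75x improvement, while the exponential view already expects 32x, which is exactly where the surprise effect comes from.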

 

 

Let's look at the impact of AI on our working world. That artificial intelligence will change our work, how we work, and the work itself is not in itself surprising. What we are seeing now, however, in light of current developments, is that generative AI is penetrating areas that we humans initially considered our untouchable core competence, for example creativity and knowledge work. How do you assess the situation: will AI disrupt areas that we didn't have in mind before?

 

I read a book the other day that painted a future scenario in which care workers and craftsmen drove Porsches, and the knowledge workers were the ones who had relatively low-paid jobs. 

It's a bit of an upside-down world from today's perspective. But that is actually a possible consequence of this technology.

 

I remember, three or four years ago, I was on the road with a presentation to media companies in which I showed the potential that exists when you automate media value creation via software. At that time, there were already excellent examples from other industries, such as Shopify. I wanted to show that the building blocks for this were already there: GPT already existed, and it was foreseeable that images and videos would eventually get to that point as well. While giving this talk, I noticed something interesting: the creative people, the journalists and so on, remained firmly convinced that technological progress would not disrupt their business.

 

However, knowledge work and creativity are, interestingly, the first things that are perhaps genuinely at risk of disruption.

There are no reliable studies yet, but there are initial forecasts on the market, statements suggesting, for example, that knowledge workers could work 36 percent more efficiently with generative AI. Thought the other way round, that could mean that perhaps 36 percent fewer knowledge workers would be needed. There is also much speculation about what the current developments could mean for software development. One scenario is that the marginal cost of software development could approach zero, as has already happened historically with other things, such as memory prices, bandwidth, or CPUs. Software development would become cheaper and cheaper, salaries in software development might fall, and we could work more effectively.

This is great for me as an "innovation person", of course, because the biggest obstacle to prototypes and MVPs has been that software development is simply so expensive. Assuming it came to that and software development only cost a fraction of what it costs today, you could develop and build much more customized software and would no longer be so dependent on commodity software. That would have a significant impact on the tech industry.

 

Software development is just one example. The same is true for a knowledge worker like me. In my job, it's about absorbing a lot of information from different sources, storing it, and developing solution-oriented concepts. This is precisely what large language models can do quite well: draw insights from various data pools, derive conclusions, and generate outputs, for example in a PowerPoint. As a knowledge worker, I am a candidate who would prefer to do at least part of my tasks augmented by AI in the future. I deliberately say augmented, because I don't believe AI will completely replace knowledge work. Instead, I think that we can work in a more leveraged way, concentrate more on our core tasks, and use AIs to eliminate tedious tasks and take complexity out of the work.
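
As a small illustration of this augmented way of working, here is a minimal sketch assuming a generic LLM endpoint. The function call_llm() is a placeholder for whichever model or API is actually used, and the sources and question are invented for the example.

```python
# Minimal sketch of AI-augmented knowledge work: condense several sources
# into a short, question-focused draft that a human then reviews.
# call_llm() is a placeholder for whichever LLM API you actually use.

from typing import List

def call_llm(prompt: str) -> str:
    """Placeholder: send the prompt to an LLM endpoint and return its reply."""
    raise NotImplementedError("Connect this to your LLM provider of choice.")

def summarize_sources(sources: List[str], question: str) -> str:
    """Combine raw source material into one brief, question-focused summary."""
    material = "\n\n---\n\n".join(sources)
    prompt = (
        f"Using only the material below, answer this question: {question}\n\n"
        f"{material}\n\n"
        "Reply with three concise bullet points and one conclusion."
    )
    return call_llm(prompt)

# Example with hypothetical inputs; a human expert reviews the draft afterwards:
# draft = summarize_sources([market_report, meeting_notes], "Which trend matters most for us?")
```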

Finally, wherever there is knowledge work, the question must be asked: Do we still need the same number of knowledge workers as before, or do we need architects who have the big picture and the vision in their heads? 

 

I think these questions have to be asked now in every industry and every business. And they have to be asked quickly, because we live in a "cloudified" world. Microsoft Office and comparable software, for example, come from the cloud today. As soon as AIs are integrated into these products, the productivity levers are there from one day to the next.

 

As a company, you should then, of course, quickly know what the implications are for your processes, your business model, etc. 

 

 

You are also working with colleagues across the Bertelsmann Group to clarify these questions. What would you share from those experiences? How should companies position themselves now?

 

The most important thing up front: you always think you are too late. ChatGPT and generative AI have been top of mind in the market for a while. Nevertheless, from my point of view, it is not too late. It is precisely the right time to think through the issue calmly and systematically, without rushing.

ChatGPT

ChatGPT is the further development of OpenAI's GPT into a chatbot that uses artificial intelligence to communicate with users via text-based messages. The US company OpenAI released it in November 2022.

AI is not a topic that a single department can somehow deal with on its own. In the future, AI can be used everywhere, in all processes and corporate functions.

That's why an orientation workshop is an excellent first step, one to which you invite a really broad, cross-functional, and diverse group of colleagues. Initially, everyone should be brought up to speed: What does generative AI mean? Which variants are there? Which are particularly relevant for your own business? Then, step by step in the workshop, systematically examine what AI means for the individual areas, for example for legal or marketing.

What does AI generally mean for the company processes and the business model? 

What does this mean for the content of the products? What customers do you have? 

 

Everything can be collected, prioritized, and evaluated in the orientation workshop before the first selected use cases can be tested. 

 

 

Which roles and competencies do you think will become essential and tend to become even more important? 

 

I would look at the issue from different angles. For example, if you have an AI generate an article and look closely at it, you will see that it is average. It's certainly not bad, but it's not outstanding either, and honestly, it's so average that five minutes later you no longer remember what you read. "Aha" moments or personal experiences are entirely missing. Sometimes there is meaningless stuff, for example when utterly new ground is being covered. For me, this means that anyone above average in their job, no matter what it is, will be safe from disruption.

 

On the other hand - and I think we have to be honest about this - anyone who underperforms or only performs the simplest tasks in processes has a real problem. That is precisely what AI is disrupting. Or rather, they will be disrupted by someone who understands how to use AI to do those tasks.

That's something you should personally reflect on and consider in your career. 

 

And for the other perspective, we have to sort out again what generative AI is good at and what it is not. It's a bit like peeling an onion. What people are seeing right now is that, with ChatGPT for example, you can generate text, shorten text, lengthen text, transform text, whatever. Honestly, I don't think that's the critical part; that's more the playful aspect of these AIs. Of course, if you're a content creator, that's also crucial, but what's much more exciting is that these models are good at taking the complexity out of complex systems. For example, I'm not good at maths, so I'm not good at statistics or statistical methods either. Every time I have to work out in depth how to set up complex formulas to reach a conclusion, I have to ponder for a long time which method to use.

Now there is a demo showing that, in the future, plugins can be used with ChatGPT; one of them is Wolfram Alpha, a tool for mathematics, statistics, and stochastics. If you integrate this rather complicated tool into ChatGPT, you could simply tell ChatGPT what you want done and with which data; ChatGPT then serves Wolfram Alpha, or better, translates for it, and returns the results from Wolfram Alpha.

WolframAlpha

Wolfram Alpha is an online service for retrieving and presenting information, based on the Mathematica software developed by Wolfram Research.

It is a semantic search engine designed to process queries with specific algorithms and produce results.
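
The plugin idea Martin describes can be sketched roughly like this. This is not the ChatGPT plugin mechanism itself, just an illustration of the same pattern: the language model translates a plain-language question into a query for an external calculation engine, which does the actual maths. The endpoint shown is Wolfram|Alpha's public Short Answers API; call_llm() is a placeholder for the LLM in use, and WOLFRAM_APP_ID is an assumed credential.

```python
# Minimal sketch of the "LLM + calculation engine" pattern described above.
# The LLM only translates between natural language and the tool;
# Wolfram|Alpha does the actual computation.

import requests

WOLFRAM_APP_ID = "YOUR-APP-ID"  # assumed credential for Wolfram|Alpha's Short Answers API

def call_llm(prompt: str) -> str:
    """Placeholder: send the prompt to an LLM endpoint and return its reply."""
    raise NotImplementedError("Connect this to your LLM provider of choice.")

def ask_with_wolfram(question: str) -> str:
    # 1. Let the LLM rephrase the question as a compact Wolfram|Alpha query.
    query = call_llm(f"Rewrite this as a short Wolfram|Alpha query, nothing else: {question}")
    # 2. Delegate the computation to Wolfram|Alpha.
    response = requests.get(
        "https://api.wolframalpha.com/v1/result",
        params={"appid": WOLFRAM_APP_ID, "i": query},
        timeout=10,
    )
    response.raise_for_status()
    # 3. Let the LLM phrase the raw result as a readable answer.
    return call_llm(f"Explain this result for the question '{question}': {response.text}")
```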

Back to business: we all use SAP systems, which can be similarly difficult to use, or other complicated databases. In the future, ChatGPT could serve as a mediator to which you tell what you would like to have done. It could also address several systems, mediate between them, and route data. In my view, that is the real benefit ChatGPT can provide. Take a job where I look at tickets in the ticket system and route them: AI will replace such a job in the relatively short term. However, if I were a strategist who knows precisely how the company has to develop and position itself, who knows the customers, and who could therefore manage and deploy an AI in a targeted manner and act as its director, then I would be relatively safe, because those jobs are becoming more and more necessary.
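
As a small illustration of the ticket example, here is a minimal sketch of LLM-based ticket routing. The queue names are invented for the example, and call_llm() is again a placeholder for whichever LLM API is in use.

```python
# Minimal sketch: an LLM assigns an incoming ticket to one of a few queues.
# Queue names are invented for illustration.

QUEUES = ["billing", "technical-support", "sales", "other"]

def call_llm(prompt: str) -> str:
    """Placeholder: send the prompt to an LLM endpoint and return its reply."""
    raise NotImplementedError("Connect this to your LLM provider of choice.")

def route_ticket(ticket_text: str) -> str:
    """Ask the model for the best queue; fall back to 'other' on unexpected replies."""
    prompt = (
        "Assign the ticket below to exactly one queue from this list: "
        f"{', '.join(QUEUES)}. Answer with the queue name only.\n\n{ticket_text}"
    )
    answer = call_llm(prompt).strip().lower()
    return answer if answer in QUEUES else "other"
```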

 

You could say you need fewer developers but more architects. Or you could say that you need fewer marketing specialists and more marketing generalists.

 

Well, the opportunity lies a little bit in using artificial intelligence, in being able to use it sensibly for oneself and the company. Martin, a final question: is artificial intelligence, or generative AI, a curse or a blessing for the future?

 

Yes, that's a bit of a crystal ball question. We haven't talked about the dangers yet, and there are some. What about propaganda, for example, fake news, influencing people? What happens when the whole internet is simply flooded with AI content? I see that on LinkedIn, for example: dozens, hundreds of new content creators are just posting ChatGPT content and cluttering up the platform.

 

And that is only a minor risk. Then there are some who say that the development of AIs must be stopped because it could have catastrophic effects on humanity, and that AI should be treated like nuclear weapons.

As I said, we have an exponential development here. Humans are unfortunately bad at predicting such exponential developments with global impact, so it's pure guesswork.

But by my nature, I am always an "opportunity person". I always look at how you can derive an opportunity for yourself or your business from something new, and I think with this attitude, you are on the better side. That is also the only thing you can influence.

 

 

This blog article reflects an excerpt from the podcast interview with Martin Weitzel. Listen to the entire episode now (only available in German):

 


Written by

Martin Weitzel
Expert for innovation topics