ABB Marine & Ports experts and partners discuss the emergence of ChatGPT and how Large Language Models (LLMs) can be optimally used in a marine engineering context.

ChatGPT, driven by developer OpenAI’s GPT-4 engine since the engine’s March release, has attracted massive attention. It has become so talked about that it is now shorthand for LLMs in general. Microsoft followed up with its GPT-4-based Microsoft Bing search chatbot, Google with its competing Google Bard, and recently even information hub Bloomberg got in on the act with the release of BloombergGPT, purpose-built for finance.

ABB Generations was party to a free-wheeling conversation on LLMs featuring Peter Sarlin, CEO, and Tarmo Pajunen, Business Development Executive, Industrial Sector, at Silo AI, the largest private AI lab in the Nordics; Igor Balashov, Senior Data and AI Specialist, Manufacturing, at Microsoft; Roy Funck, Head of Technology, at ABB Marine & Ports and colleagues Tomas Tengnér, Global Product Manager, Energy Storage Solutions, and Ola Hjukse, Portfolio Manager for Automation and Control Products.

The ‘packaging’ is the revolution

Roy Funck (ABB): “Generative AI is itself old news, but engineering it for public use is the new frontier. Other AI offerings are more powerful but not wrapped in the same user friendliness. ChatGPT is basically a smart super-search engine able to respond to prompts with human-sounding answers.”

Peter Sarlin (Silo AI): “GPT models are just one aspect of large language models (LLMs) and LLMs one aspect of generative AI. We have been working with these models for years. They’re self-supervised systems designed to solve what is called a ‘token’ or word prediction problem. They learn to predict the next word in a sentence from a vast dataset. In simple terms, these models merely provide a probability distribution of the most likely next words, and we’ve now gradually learned to better control, instruct and finetune them to create value for specific tasks. This is still far from ‘super-intelligence’, and far from how the human brain works or from human-level performance.”
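The word-prediction problem Sarlin describes can be made concrete with a toy sketch: count which word follows which in a small corpus and turn the counts into a probability distribution over likely next words. The corpus and code here are illustrative only; real LLMs learn this mapping with neural networks over sub-word tokens rather than word counts.

```python
from collections import Counter, defaultdict

# Toy illustration of the next-word prediction problem: tally which
# word follows which, then normalize the counts into probabilities.
corpus = "the ship sails the sea the ship docks at the port".split()

follower_counts = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    follower_counts[current_word][next_word] += 1

def next_word_distribution(word):
    """Return {next_word: probability} for the given word."""
    counts = follower_counts[word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

print(next_word_distribution("the"))
# {'ship': 0.5, 'sea': 0.25, 'port': 0.25}
```

Generating text then amounts to repeatedly sampling from this distribution, which is also why nothing in the procedure guarantees factual correctness.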

Tarmo Pajunen (Silo AI): “The whole world is now aware of ChatGPT because of its impressively successful product launch. But as Peter and Roy say, the real revolution is not scientific but on the product side, in the value-creating user experience and conversational interface. OpenAI has spent a lot of money on making it fun to use.”

Peter Sarlin (Silo AI): “It’s important to remember that generative AI applications like ChatGPT are not creative in themselves, but good at creating novel content by combining individual elements from the vast amount of data provided to them. The quality of their output is dependent on the quality of the data they’ve been trained on. They have the potential to help enormously in content creation, search tasks and human-machine interfaces. Quality control should, however, still be a human responsibility as the output of these models isn’t guaranteed to be factually correct. In most cases, the more controlled, instructed and fine-tuned a model is for a specific use case, the better the output.”

Igor Balashov (Microsoft): “I concur with Peter’s perspective. It is noteworthy that large language models (LLMs) such as ChatGPT and Microsoft Bing Chat, both built upon the foundational models GPT-3.5-turbo and GPT-4, signify a significant advancement in the process of democratizing search capabilities. These platforms mark a substantial departure from conventional search engines, introducing a user-friendly approach that substantially augments access to information and facilitates the exploratory quest for knowledge.”

Peter Sarlin (Silo AI): “For me, the launch of ChatGPT was a ‘Tesla moment’ for generative AI. When Tesla sparked a hype wave around self-driving cars, it triggered a wave of investment in autonomy-related tech. We’re seeing the same thing now; rather than being transformative on its own, ChatGPT will spark significant investments in generative AI as a value creator in software products. OpenAI has made a big contribution by paving the way.”

ChatGPT is basically a smart super-search engine able to respond to prompts with human-sounding answers.

Versatile coding assistance

Ola Hjukse (ABB): “Given that it can feed on the entirety of GitHub (the world’s largest code-sharing platform designed to simplify project collaboration and management), ChatGPT can help to write, test and improve code. It can clean up existing code by correcting mistakes, simplifying complex ideas, and flagging bugs. At ABB, we can use it to get part of the way in writing, for example, executable Python code for simple applications, or controller code for motor and converter controls. It’s cool to be able to write code without having to be an expert in the Python mindset. It’s efficient in all kinds of languages.”

Tomas Tengnér (ABB): “That’s right. I actively use ChatGPT in my work to generate code for useful everyday functions. For example, I struggled to code a fast tool that can highlight any category of abbreviations in a Word document, but was able to do it in ChatGPT. It also clearly understands the purpose of the code. For example, I pasted in a snippet of code from GitHub and it described exactly what the code did (downloading TV shows from a Swedish streaming service). It also commented on the code in a pedagogical way by adding a warning about infringement of copyright laws.”
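A tool of the kind Tengnér describes can be sketched in a few lines. This is not his actual tool: it works on plain text rather than a Word document, and the regex heuristic (runs of two or more capitals, optionally with digits) is an assumption for illustration. A Word version would walk the document’s runs with a library such as python-docx and apply highlighting instead of printing.

```python
import re

# Heuristic: an "abbreviation" is a run of 2+ capital letters,
# optionally mixed with digits, standing alone as a word.
ABBREV_PATTERN = re.compile(r"\b[A-Z][A-Z0-9]{1,}\b")

def find_abbreviations(text):
    """Return unique abbreviation-like tokens in order of appearance."""
    seen, found = set(), []
    for match in ABBREV_PATTERN.finditer(text):
        token = match.group()
        if token not in seen:
            seen.add(token)
            found.append(token)
    return found

sample = "The LLM behind ChatGPT was trained by OpenAI; ABB and IPR rules still apply."
print(find_abbreviations(sample))  # ['LLM', 'ABB', 'IPR']
```

Note that the word-boundary anchors deliberately skip capitals embedded in mixed-case names like “ChatGPT” or “OpenAI”, which is roughly the judgment call a human highlighter would make.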

Tarmo Pajunen (Silo AI): “I’d caution that getting ChatGPT to, for example, create a poem for a colleague’s birthday is one thing, but it’s quite another to write software for real-world applications using code that it pieces together from someone else’s code from GitHub. That could potentially open up a hornet’s nest of intellectual property rights (IPR) issues. Because if your code isn’t all open-source, to whom does it, or bits of it, actually belong?”

Ola Hjukse (ABB): “In cases where we do use open-source data to develop a certain piece of software, often we will start out using a code base facilitating general functionality not specific to marine – for example, the layer that sorts data, handles interfaces, and generates alarms. This we can get, for example, from online R&D use forums. All this needs to be in place before we start on the ‘marinizing’ bit, developing code specific to the maritime use case, which is the value-adding layer. Generating the base layer has traditionally been a very manual process, where we can now save effort and money by using LLMs before we focus on the marine-specific application.

“One proviso is that the usefulness of the LLM is dictated by the quality of human input. The better the framing of the problem statement or prompt, the better the answer. For coding in the marine electronics context, it will force the experts to adopt a better-thought-out design approach earlier, versus the previous tendency to shift responsibility to software developers with loose definitions. The experts will really have to analyze key system functionality questions ahead of time, which could result in faster prototyping with end-users.”
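Hjukse’s point about framing can be illustrated with a structured prompt template. The field names and wording below are assumptions for illustration, not an ABB or OpenAI convention; the idea is simply that spelling out role, constraints, interfaces and acceptance criteria up front tends to yield a more usable answer than a loose one-line request.

```python
# Illustrative prompt scaffold for code-generation requests.
# All field names here are hypothetical, chosen to show the shape
# of a well-framed problem statement.
PROMPT_TEMPLATE = """\
Role: You are assisting a marine electronics engineer.
Task: {task}
Constraints: {constraints}
Interfaces: {interfaces}
Acceptance criteria: {criteria}
"""

prompt = PROMPT_TEMPLATE.format(
    task="Generate a Python function that filters converter temperature readings.",
    constraints="Standard library only; readings arrive as a list of floats.",
    interfaces="Input: list of Celsius readings. Output: readings above a threshold.",
    criteria="Must handle an empty list without raising.",
)
print(prompt)
```

Filling in the acceptance criteria forces exactly the ahead-of-time analysis of system functionality that Hjukse describes.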

Simplifying documentation and design processes

Igor Balashov (Microsoft): “From a contractual standpoint, AI chatbots built on large language models (LLMs) offer the intriguing capability to generate robust contract frameworks based on user instructions. Non-legal individuals with clear objectives can create comprehensive draft contracts through simple descriptions of requirements. However, professional review and translation remain essential before these drafts attain legal status.

This technology also enables revisions and corrections. Beyond legal use, it aids in generating marketing plans. The streamlined process of editing existing drafts eases the review process, showcasing the power of this feature.”

Tomas Tengnér (ABB): “In the context of having the LLM trained on internal proprietary documents like design guidelines, ChatGPT could also be useful to support the drafting of routine policy documents, as well as content assistance for our sales team, who in our case often get inquiries or specific questions from potential clients regarding batteries and battery chemistry. Our sales colleagues could potentially use ChatGPT to generate relevant technical explanations without having to spend a lot of time researching the material themselves.”

Accelerating the research phase with ChatGPT can support faster decision-making.

Igor Balashov (Microsoft): “Within the context of advanced technological developments, it is evident that the potential for leveraging models trained on existing design data and relevant physical principles holds immense promise, particularly within the realm of industrial design optimization. For instance, when confronted with the intricate task of refining ship or vessel system designs, these models emerge as valuable tools, capable of effectively addressing complexities such as the strategic layout of systems, encompassing components of an electrical nature.”

Tomas Tengnér (ABB): “In addition, you can save time on basic research, for example when you’re trying to get to grips with a new field. If you have the luxury of consulting an expert you might expect to get an answer in hours, depending on the complexity of your request. The next option would be using a conventional search engine, but that still requires wading through a lot of information just to answer your one question. Accelerating the research phase with ChatGPT can support faster decision-making.”

Roy Funck (ABB): “Google Bard has the advantage of being able to access up-to-date information, where the public ChatGPT stops at 2021. The latest GPT-4 and Microsoft Bing also provide references/sources for answers, which is very useful for more scientific research.”

Igor Balashov (Microsoft): “It’s important to note that Microsoft employs various generative AI technologies beyond GPT-4. Bing Chat serves as a public LLM for democratizing search, but Microsoft’s cognitive services portfolio is multifaceted. We’re integrating LLMs and generative AI into traditional MS Office products. The Copilot framework in Microsoft 365 and Microsoft Power Platform layers OpenAI tech, offering AI-driven assistance for content creation and tool development, democratizing creativity.

Our focus extends to enhancing user experiences by linking with Azure cloud. For coding, we’re introducing a novel approach: customers can describe solutions in everyday language, and Copilot translates it into low-code apps or websites. This minimizes complexity, expediting development for both citizen and professional developers.”

Overall benefits and limitations

Tomas Tengnér (ABB): “The most salient benefit of using LLMs is boosting efficiency and productivity. Models like GPT-4 can assist us in doing a better job, while saving time frees us to focus on our most value-added tasks. In a sense they provide individuals with superpowers to move faster towards mastery – a master marketer, master speech writer, master coder – allowing us to reach further.”

Roy Funck (ABB): “I just want to emphasize again that the answers LLMs generate are not always right. You can’t 100 percent trust the answers ChatGPT comes up with. It’s just a mathematical model. It produces answers that may sound convincing, but, since it is only a model, it has no real clue as to what it is doing.

“In addition, from a general perspective, answers could potentially discriminate against certain sections of the population as they might include hidden biases reproduced from the vast public domain dataset. You also have to be careful not to embed biases in prompts.”

Ola Hjukse (ABB): “In the same way as the discriminatory perspective is a clear area of sketchiness, so is the fact that LLMs may deliver wrong answers. They are not a replacement for software engineers but rather an assistant. Code generated by ChatGPT should always be verified before it is implemented. Everything needs to be checked. Keeping humans in the loop as a validation gateway on output is necessary. Again, you also need to be careful of legal issues in terms of it reproducing code that has IPRs connected to it.”

Tomas Tengnér (ABB): “I agree. Right now, you definitely have to quality-check the results, but probably very soon you’ll only need to do regression testing on the source code to quality-check the outcome.”
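The “validation gateway” Hjukse and Tengnér describe can be sketched as a small regression harness: before accepting a generated function, run it against known input/output pairs. Here `generated_mean` merely stands in for model output, and the cases and names are illustrative assumptions.

```python
# Minimal sketch of regression-testing ChatGPT-generated code before
# accepting it. `generated_mean` is a stand-in for model output.

def generated_mean(values):          # pretend this came from ChatGPT
    return sum(values) / len(values) if values else 0.0

REGRESSION_CASES = [
    ([1.0, 2.0, 3.0], 2.0),
    ([10.0], 10.0),
    ([], 0.0),                       # edge case the reviewer insists on
]

def validate(func, cases, tolerance=1e-9):
    """Return True only if func reproduces every known result."""
    return all(abs(func(inputs) - expected) <= tolerance
               for inputs, expected in cases)

print(validate(generated_mean, REGRESSION_CASES))  # True
```

A generated function that silently mishandles the empty list, or any other case, fails the gate and never reaches the codebase; the human contribution shifts from writing the code to curating the cases.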

Models like GPT-4 can assist us in doing a better job, while saving time frees us to focus on our most value-added tasks.

Next steps in LLM commercialization?

Tomas Tengnér (ABB): “It will be interesting to explore with our partner Microsoft training an LLM on our own proprietary data to maximize that knowledge base. The model would likely be adapted from Bing GPT-4, but in a bespoke way, because we don’t want to release information to competitors. Only Microsoft would have access. Competitors are in the same position, of course. But the key question is: do we have sufficient corporate data of our own for the model to become smart enough to be useful?

“Because GPT-4 now includes multimodal capabilities, i.e., it can understand images, it could open up the possibility to train models on our single-line diagram design instructions for Onboard DC Grid™. This would help to automate a lot of daily work, especially in the bidding phase of projects. Owning a bespoke model trained on our own proprietary data has great potential.”

Ola Hjukse (ABB): “It could be very useful also for our technical support service, as engineers would be able to use our own siloed data to come up with suggestions as to what caused a specific issue. That would save them a lot of time.”
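The support scenario Hjukse outlines rests on a retrieval step over internal data: surface the most relevant past cases for the engineer (or an LLM) to reason over. The sketch below uses naive word-overlap scoring on invented case notes purely for illustration; a production system would use proper search or embedding-based retrieval.

```python
# Sketch of retrieval over siloed support data: rank past case notes
# by word overlap with a new issue description. Case texts and the
# scoring scheme are illustrative assumptions.
CASE_NOTES = [
    "Converter tripped on overtemperature after cooling pump fault.",
    "Battery string isolated due to cell voltage imbalance alarm.",
    "DC grid breaker opened during crash-stop maneuver test.",
]

def rank_cases(query, notes):
    """Rank notes by how many query words they share, best first."""
    query_words = set(query.lower().split())
    scored = [(len(query_words & set(note.lower().split())), note)
              for note in notes]
    return [note for score, note in sorted(scored, reverse=True) if score > 0]

print(rank_cases("overtemperature alarm on converter", CASE_NOTES)[0])
```

Feeding the top-ranked notes into the prompt is what grounds the model’s suggested root causes in the company’s own history rather than the public internet.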

Igor Balashov (Microsoft): “I concur. A pertinent advancement for all companies lies in crafting GPT-4-based models capable of contextualizing conversations. ABB has the potential to establish a contextualized Chat, utilizing internal data alone.

We’re actively collaborating with ABB on pilot initiatives to enhance customer service via generative AI for summarizing case resolutions, freeing humans from routine tasks. Additionally, our GitHub Copilot promises significant strides in software development’s agility. The interplay between GPT-4 and OpenAI’s DALL-E for image generation holds promise for product design aesthetics.

It’s worth emphasizing the necessity of innovative players and heightened competition to drive technology adoption across diverse industries.”

Peter Sarlin (Silo AI): “Quite right. Customers we talk to are concerned about using closed generative AI models. I definitely think the commercial future of LLMs is in secure, bespoke downstream applications built on top of open or otherwise accessible base models, fine-tuned on narrow proprietary data sets, and trained to solve specific use cases very well.”

Tarmo Pajunen (Silo AI): “ABB has a huge pool of proprietary data where generative AI could be very powerful in bringing new ways of utilizing, reusing and accessing that data. But in general terms of monetizing LLMs going forward, I believe it will happen with various different types of interfaces and integrated to products and services. I believe that a minority of use cases are best solved by manually chatting to AI with text interface. It will be interesting to see what it looks like in two to three years’ time.”

ABB has the potential to establish a contextualized Chat.

The genie is out of the bottle

Tomas Tengnér (ABB): “Things are moving extremely fast. Big companies have to deploy this technology because they can’t risk being left behind. Where would you be if a main competitor used it to radically improve the efficiency of their processes from coding to marketing and sales, for example? The challenge is that nobody quite knows where it is going, or what is happening inside the ‘mind’ of the models. We can certainly make use of GPT-4, but what will GPT-5 bring?”

Roy Funck (ABB): “What is certain is that the existing regulatory framework simply doesn’t apply any more. The speed of regulation is glacial compared to this exponential progress, so we need a new approach and fast. In a few years’ time, our main competitor may be an AI system with no humans involved. What would people do if we rationalize ourselves out of work? Do we really want that?”

Tomas Tengnér (ABB): “It’s a bit scary, but I also think these types of AI systems can be of immense value to humanity. They may come up with counterintuitive solutions that can be used to develop new concepts. The internet contains so much information that could be combined in new ways, connecting different fields of expertise as no human could. I certainly think they can be very helpful in the development of my specialty, which is battery and electrochemical technology, but it will need to be carefully managed.”

ABB has a huge pool of proprietary data where generative AI could be very powerful in bringing new ways of utilizing that data.

Roy Funck (ABB): “At the same time, we should never trust a super search engine that combines texts. It can certainly dig out data but there is no guarantee the assumptions will be correct. We will have to be careful about where we use the technology in terms of actual benefits rather than hype.”

Igor Balashov (Microsoft): “Our current engagement with AI is productive. If the outcomes extend further than human understanding, it prompts a need for thoughtful contemplation. In the coding realm, GPT-4 LLM can identify correlations, unexpected solutions, and permutations that may not be apparent, owing to GitHub’s extensive training dataset. Thus, the development of solid and responsible AI frameworks remains vital, reflecting Microsoft’s commitment to responsible AI principles. Additionally, it’s essential for society to collectively assess suitable contexts for deploying this technology. Along these lines, we firmly advocate for the inclusion of regulatory measures to help shape the industry.”

Peter Sarlin (Silo AI): “Primarily I would focus on the possibilities and opportunities that come with these new technologies, but I agree that we will have to also be concerned about potential risks. For instance, despite not being artificial general intelligence, implying human-level general intelligence, today we have AI that passes the Turing test. The implications of that will pose risks that we need to consider.”

Blink and you might miss the next exponential leap in generative AI. Watch this space.