Livingdocs AI Strategy: Our CTO's Perspective

Lukas Peyer

In this paper we aim to share our perspective on the current challenges news publishers face when it comes to AI, and how we will channel our focus to provide solutions that are genuinely effective and that actually matter. We will also show what this means for the Livingdocs Roadmap in the near future.

The speed of progress in AI is dizzying. The recent breakthroughs in deep learning have even surprised its inventors. Developments in AI-driven generation of text, images, audio and even video offer unprecedented opportunities – and challenges – in a myriad of ways. The crux of the issue: How can we as a software service provider make sure we invest our energy in the right places in order to provide the most value to publishers?

First let’s take a look at the current AI landscape in general.

The breakthroughs in large language models have only been available via public APIs for a short time, and only a handful of companies offer API access.

Indeed, the available offerings are changing quickly. Along with that, pricing is changing, as are the underlying AI models themselves. When it comes to AI, no company has settled on a business model yet, and this state of flux is likely to continue for a while. There is simply too much at stake for these players, and they will continue to act boldly in response to the highly dynamic atmosphere.

In times like these, where everyone wants to be a “disruptor”, the focus lies on finding the greatest potential. At the moment, we see a race to extend the breakthroughs with large language models to images, audio and video. Alongside this, we are witnessing thousands of experiments, both pragmatic and risky, to determine where the biggest potential of these models lies.

It is possible, perhaps even likely, that the focus of the most dominant players – like OpenAI, Anthropic, Google, etc. – will continue to shift in the coming years in response to these experiments. Following their lead, we anticipate that thousands of smaller companies and startups will try to come up with specialised offerings in various niches: some may go for incremental improvements while others may go for massive ones.

Given this metaphorical gold rush, how can publishers stake their claim, and more importantly, keep it?

Key Challenges: Where and how to invest energy wisely

The first challenge in this gold rush is that most players are not yet clear on exactly what they are looking for. Thanks to large language models, text is now a programming language, and text encapsulates everything along the value chain. Therefore, every aspect of the value chain is potentially eligible for AI enhancement. There are dozens, if not hundreds, of tasks, workflows and new ideas to which publishers could apply AI.

That being said, we have narrowed down the options to present four different areas in which publishers can invest:

  1. Saving time and costs in news production within existing workflows

  2. Replacing or radically changing existing workflows

  3. Developing new content formats and improving user experiences

  4. Building up in-house know-how to leverage the current revolution and to distinguish relevant from superficial changes

While these may be rather broad suggestions, they are designed to help channel focus objectively. When it comes to everything that is of general interest or that concerns large use cases (like text editing), there are enormous resources at work to find solutions. Therefore, a perfectly viable strategy is to experiment, watch and wait. Later, publishers can either buy services for what they need that are proven to be useful, or simply copy what works. Given the rapid development of these capabilities, investing immediately in these areas creates a risk that the adopted features will quickly become obsolete.

Investments driven solely by the goal of saving time and cost are risky for the same reason. If a media company invests a lot to save a little time now with as-yet unproven methods, it is highly likely that another investment will have to be made a few years later in order to take advantage of the latest insights and tools.

Rather, investments should target bolstering in-house know-how and making publishing-specific improvements, especially those that concern important workflows or how readers experience the news products.

To that effect, investing in simple tasks like having AI assist in writing summaries or titles should be done primarily to build up know-how and to learn how AI can improve the news products, not with the primary objective of recouping your investment through time saved, for example. Many such “AI assistants” will likely have to be replaced within a relatively short time span by more efficient approaches, or even become obsolete due to larger workflow changes. However, the experiences you gather while pragmatically experimenting, building know-how and addressing key workflow improvements should offer more clarity and increase the chances of choosing the right strategies moving forward.

Focus of Livingdocs

We suspect this AI revolution will influence and change every part of the publishing industry in one way or another. We already see changes taking hold, and expect this to continue in the long run. Not only will workflows and how readers experience news products be affected, but also how the news is perceived by society in general. This includes changes in how the attention economy will evolve overall.

At Livingdocs we want to contribute in areas that have the most long-term impact and will give us more strategic options in the future.

This means building up a solid foundation for a world where many more tasks can be automated and augmented by AI than what was possible even a year ago. We want to bet on the larger trends supported by proven results rather than “breaking news” in the AI storyline. We also want to create opportunities for our customers to experiment and gather experiences so we can approach this wide-scale evolution with strength through collective effort.

These are the larger trends we will focus on:

  1. Collaboration with machines

    It is no coincidence that chat has become the breakthrough application for large language models (LLMs); an interactive back-and-forth fits these AIs very naturally. Within Livingdocs, this means that AI tasks can be seen as ‘collaborators’ that now participate in all activities. However, we believe there must be transparency and accountability for this to work. Indeed, these models serve well to assist but cannot work well unsupervised in a news organisation, with the exception of certain well-defined tasks that are simple enough. Therefore, for this collaboration to work, new user interfaces will be needed for various kinds of tasks.

  2. Text as programming language

    It is true: LLMs open up a world of new possibilities. However, you can’t give a language model precise instructions the way you usually do in other programming scenarios. Moreover, ‘programming’ a language model does not require deep technical expertise; yet it is greatly enhanced by a deep understanding of the problem you want to solve. In the context of AI assistants based on LLMs, their quality can change with the continual updating and fine-tuning of the underlying models, and is therefore notoriously difficult to assess. This means the newsroom should be involved during development and beyond. Livingdocs will look for ways to encourage and facilitate this kind of collaboration.

  3. Data storage and interaction

    Data has become a highly prized commodity. So much so that most rules regarding data are simply ignored by the ‘big players’, and data is used in violation of copyright laws and policies. However, this does not negate the fact that data is valuable and that regulations are being enacted to rein in the current ‘Wild West’ behaviour. Within Livingdocs, we want to make sure we keep an accurate history of changes when bots and algorithms work with your data. This helps both with quality control and with any data analysis going forward. We also want to invest in APIs and concepts that make interaction with your data easy for algorithms and LLMs.
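As a minimal sketch of what such a change history could look like, the snippet below records every revision with its actor, so edits made by bots stay auditable alongside human edits. All type and field names here are illustrative assumptions, not the actual Livingdocs data model.

```typescript
// Hypothetical revision log: each entry records who (or what) made the
// change, enabling quality control and later data analysis.
interface Revision {
  revision: number;
  actor: { type: 'human' | 'ai-assistant'; name: string };
  changedAt: string; // ISO timestamp
  summary: string;   // human-readable description of the change
}

const history: Revision[] = [];

function recordRevision(actor: Revision['actor'], summary: string): void {
  history.push({
    revision: history.length + 1,
    actor,
    changedAt: new Date().toISOString(),
    summary,
  });
}

recordRevision({ type: 'human', name: 'jane.doe' }, 'Initial draft');
recordRevision({ type: 'ai-assistant', name: 'tagging-bot' }, 'Added topic tags');

// Quality-control query: which revisions came from bots?
const botEdits = history.filter((r) => r.actor.type === 'ai-assistant');
console.log(botEdits.length); // → 1
```

Keeping the actor type explicit in every entry is what makes the history useful later: bot-generated changes can be filtered, reviewed or rolled back without guesswork.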

Livingdocs Roadmap

Looking at this landscape, what steps will Livingdocs take next?

  1. APIs built for automated assistants

    We are investing in APIs to update the metadata and content of documents safely and transparently while humans are actively working on them. We will also allow users to name the AI assistants collaborating via the API, so they can easily be tracked in the document history. Moreover, we will offer more APIs for working on documents in the Editor. And in all these APIs, we will put a lot of effort into content validation to make sure that all changes conform to the current project configuration.
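To make the idea concrete, here is a hypothetical sketch of how an AI assistant might prepare a metadata update through such an API. The endpoint path, payload shape and the `actor` field are illustrative assumptions, not the actual Livingdocs API; the key points from the text are that the assistant must be named (for traceability in the document history) and that the server would validate changes against the project configuration.

```typescript
// Illustrative request builder for an AI assistant updating document
// metadata. Endpoint and field names are assumptions for this sketch.
interface MetadataPatch {
  documentId: number;
  assistantName: string; // name shown in the document history
  metadata: Record<string, string>;
}

function buildPatchRequest(patch: MetadataPatch) {
  // Unnamed assistants are rejected so every change stays traceable.
  if (!patch.assistantName.trim()) {
    throw new Error('AI assistants must be named to be tracked in the history');
  }
  return {
    method: 'PATCH' as const,
    url: `/api/documents/${patch.documentId}/metadata`,
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      metadata: patch.metadata, // validated server-side against project config
      actor: { type: 'ai-assistant', name: patch.assistantName },
    }),
  };
}

const req = buildPatchRequest({
  documentId: 42,
  assistantName: 'summary-bot',
  metadata: { teaserText: 'A short AI-drafted teaser.' },
});
console.log(req.method, req.url); // → PATCH /api/documents/42/metadata
```

The design choice worth noting is that the actor identity travels with every write, rather than being inferred afterwards, which is what makes bot activity auditable in the history.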

  2. Flexible UIs for AI assistants on any screen

    We want to offer a platform where news publishers can define simple commands – like writing a summary or inserting a subtitle into a document – with minimal effort. We have created a versatile menu that can be opened anywhere in the Editor to ask simple AI assistants for help; for instance, they could write a summary or create a document. Custom assistants can be added to your installation of Livingdocs. This allows for quick experimentation and building up of know-how, as well as deploying multiple variations of the same assistant to compare and see what works best for a given newsroom.
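A sketch of what registering such commands could look like is shown below, including two variants of the same “write a summary” task so their output can be compared. The registry API and command names are invented for this illustration and are not the actual Livingdocs extension API; real handlers would call an LLM rather than the string stubs used here.

```typescript
// Hypothetical command registry for custom AI assistants.
type AssistantHandler = (documentText: string) => string;

const assistantRegistry = new Map<string, AssistantHandler>();

function registerAssistant(name: string, handler: AssistantHandler): void {
  assistantRegistry.set(name, handler);
}

// Two variants of the same task; in practice each would use a different
// prompt or model, which is what makes side-by-side comparison useful.
registerAssistant('summary:concise', (text) =>
  `Summary (concise): ${text.split('. ')[0]}.`
);
registerAssistant('summary:detailed', (text) =>
  `Summary (detailed): ${text.slice(0, 120)}`
);

const concise = assistantRegistry.get('summary:concise')!;
console.log(concise('AI changes workflows. It also changes products.'));
// → Summary (concise): AI changes workflows.
```

Because variants live side by side under distinct names, a newsroom can run both in production and let editors' feedback decide which one stays.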

  3. Real-time and event-driven architecture

    We are working on making more screens in Livingdocs, like the dashboards, ready for real-time collaboration. Our primary objective here is to enable faster collaboration between humans in the newsroom. However, this will pay off when working with AI assistants, as well. As some tasks take a while, editors may want to continue their work while AI assistants are working on delegated or scheduled tasks which, when completed, will trigger notifications to the editors.
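The notification flow described above can be sketched with a simple event bus: an editor delegates a task, keeps working, and a completion event triggers their notification. Event and field names are assumptions for this sketch; a real assistant would also run asynchronously, which is kept synchronous here so the example stays self-contained.

```typescript
import { EventEmitter } from 'node:events';

// Hypothetical event-driven task delegation.
const bus = new EventEmitter();
const notifications: string[] = [];

// The editor's UI would subscribe to completion events like this.
bus.on('task:completed', (payload: { taskId: string; result: string }) => {
  notifications.push(`Task ${payload.taskId} finished: ${payload.result}`);
});

// Delegate work to an assistant; in a real system `work` would run in the
// background while the editor continues editing.
function delegateTask(taskId: string, work: () => string): void {
  bus.emit('task:completed', { taskId, result: work() });
}

delegateTask('summary-1', () => 'Draft summary ready');
console.log(notifications[0]);
// → Task summary-1 finished: Draft summary ready
```

The point of the pattern is decoupling: the editor never blocks on the assistant, and any number of UIs (dashboards, the Editor, mobile) can subscribe to the same completion events.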

  4. Feedback loops and evaluation

    As it is hard to evaluate the quality of LLMs, and the underlying models change often and unpredictably, we will look to integrate feedback directly into the UIs. One aspect we already mentioned is the possibility to have different variants of the same tasks so users can compare them side by side, even in a production environment. That being said, we have more ideas on this front to simplify creating extensions in a complex environment like a newsroom.

  5. Integrations

    Much of the progress in the coming years will be made accessible by various companies offering APIs for specific use cases. We are looking into potential companies and services to integrate into Livingdocs, and are actively working on closer integration with existing partners like iMatrics.

Customer Involvement and Partnerships

Having a long-term vision will help us at Livingdocs create momentum in the right direction. But to move fast, we have to move together: there are so many options that exploring them all alone would leave you in the dust. We ardently support our customers’ efforts to experiment with AI and gather valuable know-how, so that our collective knowledge will allow us to move faster and more effortlessly in the future.

We want to hear what you are building so we can give you feedback on what our next steps in this area will be. This strategy paper cannot cover all the details of what we are working on, but an ongoing dialogue will enable us to come up with better solutions and communicate them with the most clarity possible.

We are committed to taking steps to accumulate proven expertise in the AI domain in order to increase strategic opportunities both for ourselves and our customers as we continue on this ever-evolving journey together.