“Don't draw me a pie chart [...] just tell me what I should sell tomorrow” – Avi Perez about AI-driven Business Intelligence

Business Intelligence and Artificial Intelligence have always belonged together. But as BI has grown a bit rigid, AI has taken the biggest leap yet with the advent of LLMs, says Avi Perez of Pyramid Analytics. Now, non-tech users can use the full power of data and take productivity to new heights. There are challenges still, like data quality or privacy, but Avi discusses solutions and warns NOT to miss that train.

The CTO vs Status Quo series studies how CTOs challenge the current state of affairs at their company to push it toward a new height … or to save it from doom.

“We’re at an inflection point now […] you might be missing something dramatic.”

In theory, it’s easy to understand why Artificial Intelligence could be a game changer for Business Intelligence. Instead of building charts or painstakingly querying data, your users could simply ask questions in plain language and get the insights they need in response.

“As if it works like that!” you ask in disbelief. Well, you aren’t entirely wrong, but it seems that this ideal of an AI-driven BI that serves as an intelligent advisor rather than just a collection of pie charts is closer than you may think.

Avi Perez is working on it as we speak, and he told us how he and other players in the market tackle challenges such as:

  • what LLM to choose,
  • combining a generic LLM with a custom overlay vs training the LLM itself,
  • data privacy – using LLMs without sending any sensitive business data to them,
  • tailoring the LLM solution to a company’s unique needs,
  • measuring the impact of AI-driven BI.

Sounds interesting? Meet Avi and find out what the future of BI holds!

About Avi & Pyramid Analytics

Bio

Online analytics expert, co-founder, and CTO of Pyramid Analytics, at its helm since 2008. Avi’s vision drives the development of Pyramid’s enterprise analytics solutions. What’s more, his deep interest in data science and AI is the backbone of the company’s R&D. Prior to Pyramid, he co-founded and managed Urix, a U.S.-based healthcare analytics company, personally architecting its analytics systems.

Expertise

Data analytics, embedded analytics, data science, corporate strategy, Artificial Intelligence

Pyramid Analytics

Launched in 2008, with offices in Amsterdam, Tel Aviv, and London, among others, Pyramid offers enterprises an end-to-end embedded analytics solution that includes data prep, business analytics, and data science. The Gartner® Magic Quadrant™ named Pyramid one of the “visionaries” of the ABI marketplace, in part thanks to its gen AI capabilities. In May 2022, the company secured $120 million in investment at a nearly $1 billion valuation.

Pyramid Analytics in 2024

Hello Avi, how are you doing today? I know that Pyramid Analytics has been doing well recently. You’ve shared a screenshot from Super Data Brothers that shows Pyramid is the fastest-growing embedded analytics solution. But you may be even more proud that Pyramid Analytics has been featured in Gartner’s Magic Quadrant for Analytics and Business Intelligence (ABI) Platforms report.

How good is 2024 for you?

We had a spectacular year, being named a visionary in the Gartner Magic Quadrant for Analytics and Business Intelligence (ABI) Platforms. We’re on par with Tableau and just a bit off from Microsoft.

It’s hard to go against behemoths like Microsoft, SAP, Oracle, Google, or AWS. But you can compete with them on a product level, and you don’t necessarily need to be a behemoth to build the best product in the market.

The Magic Quadrant reviews the whole company, including the product. But our most important achievement is being included in the Critical Capabilities report, a sister report to the MQ that reviews the product alone. In that one, we’re clearly the number-one-ranked solution based on features and functions – we scored at the top both on capabilities and across all the different use cases.

You’ve been with Pyramid for 16 years. Did you expect, back in 2007, that it would become one of the leading BI platforms in the world?

We didn’t expect to be where we are now. Our product hasn’t changed much in a while, but it takes a lot of muscle and money to get the message out there and educate the marketplace and analysts – it’s one of the handicaps of being a small company. 

The truth is that if I stripped the solution we had in 2007 to its essentials, you’d be shocked to find out it’d be roughly the same sketch as what we have today. It goes back even further. In 2001, my previous company, a vertically focused healthcare company in the US, attempted to tackle what Pyramid is solving today.

The Pyramid concept has been around for a long time, but only now have all the pieces come into play. Everyone recognizes all the elements the way we understood them 25 years ago. You could say that the analysts caught up with us. Our impression of where business analytics was going was right all along.

A big part of the puzzle is AI and what it has brought to the BI marketplace. Gartner’s report showed that we are clearly one of the leaders in gen AI integration into BI.

Can you tell me more about it? What’s so special about the way you handle gen AI?

Sure, but first, we need to take a step back.

AI has actually been a part of BI for a long time. It’s one of the more obvious places to apply it. You want to provide BI functionality to many users, but a lot of what you provide is complicated, so you need all these magical tools to solve things. We call that AI. A lot of players on the market have more or less of it.

Pyramid Analytics has tons of AI. We have dozens of unique AI-centric constructs and components throughout the platform, and every one of them has millions of different parts.

The gen AI framework is relatively new in that it goes hand in hand with large language models (LLMs) and deep neural networks, which have come to general attention in the last two years with ChatGPT and OpenAI.

The interpretive power of LLMs is in a different league than anything that came before. Their profound capability to understand what the user says and to respond intelligently is unbelievable. This is definitely a quantum leap, and it’s pivotal in BI. We want to let people tap into the sophistication of analytics, but the vast majority of users are non-technical and don’t know the nuances of what to ask and how to ask it. The LLM bridges that gap.

Having said that, Pyramid’s gen AI functionality is based on a model where we outsource to the LLM only what we call recipe making. We don’t outsource the analysis and query. Pyramid handles the baking of the recipe, and the results are handed to the user.

It’s the symbiosis between the two technologies that produces a very sophisticated, unbelievably clever solution. The model is highly scalable and doesn’t require you to pump all your data into the LLM or to fine-tune the LLM. You can take any generic LLM, and in theory, it’ll work.
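To make that split concrete, here is a minimal, hypothetical sketch (the `call_llm` helper, the recipe schema, and the sample table are illustrative assumptions, not Pyramid’s actual API): the LLM sees only the schema and the question and returns a structured “recipe”, while the query itself runs locally against the customer’s own database.

```python
import json
import sqlite3

def call_llm(prompt: str) -> str:
    """Stand-in for any generic chat-completion endpoint (GPT-4, Gemini, etc.).
    Only the schema and the question go in -- never the underlying rows."""
    # Canned response for illustration; a real call would hit the model's API.
    return json.dumps({
        "table": "sales",
        "measure": "SUM(amount)",
        "group_by": "region",
        "filters": ["year = 2024"],
    })

def build_recipe(question: str, schema: dict) -> dict:
    """Ask the LLM for a structured 'recipe' describing the query to run."""
    prompt = (
        "Given this schema and question, return a JSON query recipe "
        "(table, measure, group_by, filters).\n"
        f"Schema: {json.dumps(schema)}\nQuestion: {question}"
    )
    return json.loads(call_llm(prompt))

def bake_recipe(recipe: dict, conn: sqlite3.Connection) -> list:
    """The 'baking' happens locally: compile the recipe to SQL and run it
    against the customer's own database. No data ever reaches the LLM."""
    where = f"WHERE {' AND '.join(recipe['filters'])} " if recipe["filters"] else ""
    sql = (f"SELECT {recipe['group_by']}, {recipe['measure']} "
           f"FROM {recipe['table']} {where}GROUP BY {recipe['group_by']}")
    return conn.execute(sql).fetchall()

# Throwaway in-memory data just to make the example runnable.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, amount REAL, year INT)")
conn.executemany("INSERT INTO sales VALUES (?, ?, ?)",
                 [("EMEA", 120.0, 2024), ("US", 200.0, 2024), ("US", 90.0, 2023)])

schema = {"sales": ["region", "amount", "year"]}
recipe = build_recipe("What were my 2024 sales by region?", schema)
print(bake_recipe(recipe, conn))  # e.g. [('EMEA', 120.0), ('US', 200.0)]
```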

And most importantly, you don’t have to share your data with the LLM. Big companies with their top-secret mission-critical corporate data are not thrilled about sending it to OpenAI.

I think it’s a template that all vendors will use because the idea that you can fine-tune an LLM on your own in a way that allows you to build something unique with the data is a little fictitious at the moment as it stands today.

Pyramid Analytics has recently earned recognition from both Gartner and BARC

Artificial intelligence – it takes ever more to stand out

AI is a big part of what I wanted to talk to you about today. Before we get to the main point, let’s talk about AI itself for a while. Of course, everyone does (or wants to do) that today, but one of my previous guests told me that many companies still don’t understand just how practically useful AI has become for businesses in a very short time. What do you say to that?

I can’t stress it enough, so I’ll repeat – the interpretive ability to understand language on the way in or to accept it from the user and to respond intelligently on the way out is a dramatic quantum leap from what came before it. That solves one of the great barriers to AI deployment – the interface.

Having said that, gen AI still has a way to go, especially in the BI space. There are significant challenges to solve before I can honestly tell you that gen AI will start to completely displace traditional BI constructs.

Everyone thinks that the end goal of business intelligence and analytics is the pie chart. It’s not. It is just a means to an end. The end goal of analytics is to look at the data and make a decision. What the user actually wants to hear is: “You don’t need to look at the pie chart anymore. Here’s the decision you’re looking for.”

I always tell people the analogy about Captain Kirk from Star Trek. He never used any kind of software to program or direct his machine how to fly. He just said: “Enterprise, fly me from here to here!” He couldn’t care less how you get there. Imagine that I could talk to the machine and say: “Don’t draw me a pie chart of my sales so I can work out what I should sell tomorrow. Instead, just tell me what I should sell tomorrow.”

We’re already close to that in some industries; think GPS or mapping tools. You type in the address and it tells you how to drive there. Of course, you still have to drive, but what we’d really like to do is sit back and tell the car to drive us.

Self-driving cars may become commonplace soon, isn’t that right?

Yes. It’s imminent. And I want the same for business decision-making.

I want my software to tell me what I need to do tomorrow to be more profitable. And the next step from there may be: you do the thing that will make me more profitable. You plug yourself into all the assembly lines, turn on the machines, and decide which cars to make or which chairs. In the meantime, I’m going to be at the beach.

Gen AI clearly has a lot of potential, but there are also some limitations. At the very least, they exist at this moment. 

Yes. The interpretive capability, which I talked about, is only the first step. The second step, the analysis, is still kind of lousy.

LLMs are not designed for doing mathematics. If you look around the internet, you’ll find a lot of interest in building deep neural networks that can produce deterministic, mathematical answers. I think it’ll happen eventually, and it’s going to be a dramatic change.

On top of that, there’s a headache about using current, live data with LLMs. These models are trained on data. When you ask ChatGPT what pizza costs on Madison 5th in New York, the answer is based on some data points it found on the internet three years ago. Even if they find a way to get fresher data, their interpretation of facts is still based on what they were trained on.

But my corporate data is not sitting on a website. It comes from a corporate structured database. The LLM that we understand today is not prepared to interpret all this data, which is like two billion rows or five petabytes sitting on Databricks or Snowflake. 

This is one of the problems that Pyramid solves today. We use a very interesting technique for gluing the LLM’s ability to interpret words to Pyramid’s ability to interpret data. We keep making it act more and more intelligently without having to move the data into the LLM, which, as it stands today, is somewhere between fictitious and impractical.

You mentioned attempts to make LLMs solve math problems.  What are some other AI trends you are particularly interested in these days?

I can tell you about the challenges we’re trying to solve in the BI and AI space today. They are all about finding ways to ask more powerful questions and get more accurate and specific answers about corporate data.

The first group of initiatives relates to being able to query your own corporate data and blend fragments of it into the LLM. The model should combine the data fragments, my question, and its deep neural network learning capability to give me an intelligent answer.

Imagine an accounting system. It doesn’t have paragraphs of text explaining profit and loss. It’s a big database with tables. One table is for the account, another table is for dates, and another one is for cost centers or business units. The last table is the fact table, which has all the numbers in it. There isn’t much text to it. There are a lot of small pieces of information that together build a story, but they are kind of bland on their own.

Gen AI is good at handling text – RAG (Retrieval-Augmented Generation) processes document-type data stores, fragments of text and paragraphs, and anything text-based. However, it doesn’t work well for tabularized data like that of the accounting system. The problem is that the vast majority of corporate decisions are made on exactly such data. Tables contain your financial data and a big part of your CRM data.

We’re working on translating RAG into something that works well with tabularized data. It generally boils down to auto-vectorization. We’ve got working prototypes, and they are capable of some unbelievable things. But we still need to figure out whether we can make it work at scale.
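One way to picture the idea, as a rough sketch with made-up names (the toy bag-of-words “embedding” stands in for a real embedding model): vectorize short descriptions of tables, columns, and measures rather than the rows themselves, retrieve the fragments closest to the question, and hand only those to the LLM.

```python
import math
import re
from collections import Counter

# Instead of vectorizing billions of fact rows, vectorize short descriptions of
# the tables, columns, and measures -- the "fragments" that give the LLM context.
METADATA_FRAGMENTS = [
    "table accounts: chart of accounts, account code, account name",
    "table dates: fiscal calendar, year, quarter, month",
    "table cost_centers: cost center, business unit, department",
    "table facts: posted amounts, debit, credit, value per account and date",
]

def embed(text: str) -> Counter:
    """Toy bag-of-words 'embedding'; a real system would use an embedding model.
    It only exists here to make the retrieval step concrete."""
    return Counter(re.findall(r"[a-z]+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def retrieve(question: str, k: int = 2) -> list:
    """Return the k metadata fragments most similar to the question."""
    q = embed(question)
    ranked = sorted(METADATA_FRAGMENTS, key=lambda f: cosine(q, embed(f)), reverse=True)
    return ranked[:k]

question = "How did posted amounts per business unit change by quarter?"
context = retrieve(question)
prompt = ("Answer using only these schema fragments:\n"
          + "\n".join(context) + f"\nQuestion: {question}")
print(prompt)  # only metadata fragments, never the fact rows, reach the LLM
```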

The other group is about being able to create formulas. I’m not talking about simple things, such as: “sum up my sales figures.” We’re working on introducing real corporate equations that have a concept behind them that you shouldn’t have to explain, such as: “Show me my gross margin percentage year on year by customer.”

The model needs to understand what gross margin is and who defines it. What’s more, it needs to understand what year a given piece of data belongs to. That may be easy for a human, but for the LLM, it’s a different story.
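A common way to approach this, sketched below with hypothetical names rather than Pyramid’s implementation, is to define the business equations once in a governed semantic layer so the LLM only has to map a phrase like “gross margin percentage year on year by customer” onto a named measure, a comparison, and a grouping; it never has to derive the math itself.

```python
# Governed business equations live in the semantic layer, not in the prompt.
# The LLM only maps a phrase onto a measure name, a comparison, and a grouping;
# it never derives the math. (Names and formulas below are illustrative.)
SEMANTIC_LAYER = {
    "gross_margin_pct": {
        "formula": "(SUM(revenue) - SUM(cogs)) / SUM(revenue)",
        "synonyms": ["gross margin", "gm%", "margin percentage"],
    },
    "net_sales": {
        "formula": "SUM(revenue) - SUM(returns)",
        "synonyms": ["net sales", "net revenue"],
    },
}

def resolve_measure(phrase: str) -> str:
    """Map a business phrase to a governed measure; fail loudly if none matches."""
    phrase = phrase.lower()
    for name, spec in SEMANTIC_LAYER.items():
        if any(s in phrase for s in spec["synonyms"]):
            return name
    raise ValueError(f"No governed measure matches: {phrase!r}")

# The structured request an LLM would emit for
# "Show me my gross margin percentage year on year by customer":
request = {
    "measure": resolve_measure("gross margin percentage"),
    "compare": "year_over_year",   # which year applies comes from the date dimension
    "group_by": "customer",
}
print(request, "->", SEMANTIC_LAYER[request["measure"]]["formula"])
```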

But we’re getting there. We’re closer and closer to a world in which we can ask our software directly how to make our shop more profitable. It’s a profound idea. Whoever gets there first is going to build something phenomenal.

Pyramid is moving towards ever-smarter, actionable data analysis

AI-powered Business Intelligence – the conversation

Let’s get to Business Intelligence. Of course, BI is a very old discipline, but AI has introduced a lot of fresh air and innovation to the field. One of our recent guests said that developers adapt businesses to frameworks, but AI adapts itself to your business. In the BI context, one could say that with AI, instead of learning from the past, you predict the future.

What are the most significant differences between a traditional BI solution and an AI-driven one, in your opinion?

If I go back 30 years to the 1990s, which sounds like an eternity, BI was about building reports or visualizations. It was designed by developers who handed reports to end users, who were primarily business people. Developers emailed the report, or worse, they printed it out and handed it on a piece of paper.

In the 2005-2010 era, there was a big leap toward self-service. The tools became so easy to use that business users could build their own reports, slicing and dicing them as they pleased. This required more development, but it gave businesses more power. It was easier to think of an idea and implement it on the spot rather than sequence a project, hire the resources, and then wait three months for that to be implemented.

Around 2015, BI extended into the data space. It was the era of data preparation. People started spending time cleaning or fixing data, interacting with the semantic layer, etc. This was good because good data is a good precursor to quality data analysis.

However, nobody really wants the developer to present them with a perfectly curated data warehouse. They want to build their own or use their own spreadsheets.

In the years 2015-2020, you could observe a further merging of BI and data analysis, and then a true rise of data science, with tons of information and billions of rows. The problem was that it was all very developer-centric and scientific. The ordinary Joe couldn’t play with this stuff.

The big challenge was to make it more user-friendly. We’re still in the middle of that cycle. Somewhere down the road, AI has become the center of attention as a way to simplify a lot of the sophistication in data science and data prep.

In its latest incarnation, around 2021-2022, Gen AI brought a quantum leap in interpretive capability. Suddenly, people realized that they could do a lot of things much more easily than ever before.

In theory, AI promises to unleash even more of that cleverness and sophistication that allows less technical people to do data science. We’re not there yet, but we’re getting closer.

Business people may not be the only ones to benefit from this. Would you agree that gen AI-driven BI also increases the importance of the CTO in BI-related decision-making? AI is, at its core, a very technical subject, and CTOs will be the ones implementing it.

Definitely, but there’s still a lot of confusion, even among CTOs.

The CTOs of our customers often tell me that they can’t wait to implement ChatGPT on their website to answer customers’ questions. But the LLM itself doesn’t have the answers. It can act as the interpretive bridge to get you answers, but unless you’ve specifically trained it on your specific business problem, it doesn’t know anything about it.

Take a big company like Bank of America – ChatGPT has no idea how to solve savings account issues for its users. Perhaps it could get some content on how to solve savings account issues for customers from 50 different banking sites, but that would only produce a generic answer.

Even worse, it may completely hallucinate, as the phenomenon is called. It will give you a specific answer, but that’s not at all how Bank of America really solves account issues. Every bank does it a little bit differently.

However, it would be much more useful and functional if I could prompt the LLM to ask me what kind of response it needs to give and then produce a response unique to me. 

Maybe big companies can try to retrain general models based on their own specific corporate requirements. But if you know anything about training LLMs, you realize how fictitious that is today. A lot of people have slowly come to understand that fine-tuning LLMs is a little oversold. It’s not going to go the way you think it will.

This is why I believe the Pyramid model will win in the foreseeable future. But there’s still a challenge here – we’re handing you a generic solution that is untrained on your data, business cases, language, and nomenclature. We need to make it work with anything.

This confusion is not the only thing that may hold CTOs back from using gen AI-driven BI. For example, they may have a very big legacy system and are not quick to innovate. The CTO took an interest in AI and wants to try it. How would they begin a conversation and gain support for something like this?

Let’s say that you’ve invested in what is today a legacy BI platform – it may have been developed 30 years ago and hasn’t really kept up with the times. There’s no AI functionality. You may wait for some AI features to be added, but it may never happen.

Sadly, it’s pretty clear that these legacy vendors are not going to invest in AI. It’s too difficult to retrofit it into their system. If you look at the platforms that were built around 25-30 years ago, you can see that they either don’t have any gen AI or it’s very light.

When you use a legacy BI platform, you may contemplate side-by-side deployment – one for the old world and one for the new world. But I would argue that at some point, you need to move on. People often say to me, “Wow, that sounds drastic.” I ask them if they are still running around with a Nokia flip phone or working on an i386 chipset. If they use modern technologies, maybe it’s time to try the same approach with analytics.

Besides, legacy solutions are, by definition, meant to be replaced eventually. Depending on which solution you choose, that change can be dramatic or not.

The most difficult part to change is the integration on the data side. I’ll use an example of how we did that at Pyramid Analytics.

Pyramid focuses on what we call direct querying. You take Pyramid, point it at your existing data estate (it could be a data lake, a database, or a data warehouse), and we work on it immediately. So, to use Pyramid’s gen AI construct, you first install the platform, which takes less than an hour, and point it at your existing data assets. Five seconds later, gen AI works out of the box.

We have a unique focus on the market. We don’t copy your data or ask you to restructure it to fit how we need to query it. Time-to-result with Pyramid is measured in hours, and the result is as good as you’ve seen in any demo we’ve given. Some other companies are working on variations of that model, too. Others require you to make dramatic changes to your data strategy or ditch it because they will not work with it at all.

The reason why not having to compromise your data strategy is so important is that the tail shouldn’t wag the dog. In this case, your analytics solution and gen AI shouldn’t dictate your data lake strategy. It should work to complement it.

Any CTO today needs to decide where they want to land on that continuum. In my personal experience, it’s better to gradually improve rather than retool, reskill, rebuild, or reinvest. Otherwise, what could be a five-hour exercise turns into a five-year story. That’s why I recommend picking a solution that keeps up with the times. Then, focus on improving your integration with it. Otherwise, you may be left out of the gen AI revolution.

Pyramid offers easy access to data for your non-technical employees

AI-powered Business Intelligence – the implementation

Let’s move on to the implementation. A CTO has convinced the stakeholders and is preparing to advise on the technical implementation of an AI-driven BI solution. What considerations should they tackle?

The first question will be: which LLMs do I want to use? Pyramid supports multiple LLMs, and you can choose to use more than one at a time.

When choosing an LLM, you need to consider the cost. There’s a definite pricing differential between GPT-4, the Azure AI suite, Google Gemini Pro suite, Claude, Mistral, or any other solution.

In my experience to date, OpenAI’s GPT-4 suite produces the highest-quality results for what we do with it in our testing. It’s also one of the most expensive of the lot, and so is Claude. Mistral and Google Gemini run for less, but the quality is a few percentage points lower. There are trade-offs there.

Languages are also a problem. OpenAI supports lots of localizations. You can speak to it in German against an English data set, and it’ll respond to you. You can switch from English to Spanish and back to Japanese. It will know what it’s doing all the way through – it’s phenomenal. Smaller and cheaper models don’t have that many languages. 

The last issue is privacy. We’re integrated with IBM’s watsonx. It allows customers to use LLMs that are off the grid, either because they run them themselves in their own hosted environment or because they use them through Pyramid’s hosting. What’s more, Pyramid is not sniffing your data. Nothing is shared. We’re on the other side of the same location.
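As a rough illustration of those trade-offs (the providers, endpoints, and cost figures below are made-up placeholders), a provider-agnostic setup could register several models with cost, language, and hosting attributes and route privacy-sensitive workloads to a self-hosted endpoint.

```python
from dataclasses import dataclass

@dataclass
class LLMProvider:
    name: str
    endpoint: str          # illustrative placeholder URLs, not real deployments
    relative_cost: float   # rough, normalized cost per request
    multilingual: bool
    self_hosted: bool      # True = prompts and data never leave your environment

PROVIDERS = [
    LLMProvider("gpt-4",         "https://api.openai.example/v1", 1.0, True,  False),
    LLMProvider("gemini-pro",    "https://gemini.example/v1",     0.5, True,  False),
    LLMProvider("mistral-small", "https://mistral.example/v1",    0.3, False, False),
    LLMProvider("in-house-llm",  "https://llm.internal.example",  0.2, False, True),
]

def pick_provider(needs_privacy: bool, needs_multilingual: bool) -> LLMProvider:
    """Cheapest provider that satisfies the privacy and language constraints."""
    candidates = [
        p for p in PROVIDERS
        if (p.self_hosted or not needs_privacy)
        and (p.multilingual or not needs_multilingual)
    ]
    if not candidates:
        raise RuntimeError("No provider satisfies these constraints")
    return min(candidates, key=lambda p: p.relative_cost)

print(pick_provider(needs_privacy=True,  needs_multilingual=False).name)  # in-house-llm
print(pick_provider(needs_privacy=False, needs_multilingual=True).name)   # gemini-pro
```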

Privacy is the number one concern for many businesses when it comes to LLMs. In real deployments, companies try to find out how secure the interaction with the LLM is – not just the data but the question itself. No CTO wants to find out that their accountant accidentally typed top-secret gross margin figures into a question. It may not even be all the information, just a fragment of it.

Think of the privacy issue in AI this way – you only want your staff to ask real business questions about real data. The more real and problematic the dataset, the more top secret the data. Good questions and confidentiality go hand in hand.

When you ask the LLM a question, you should feel confident that no one is listening and that the answer is just for you – no one else takes ownership of it. Privacy understood this way is a highly sought-after capability as things stand today. I believe there’s a lot of reluctance to get going with LLMs because of that single issue.

You also need to decide to what degree you want to unleash AI on your users. For example, you may want to have certain datasets and data models that are not gen AI-enabled because you feel concerned about your users. You feel like it could produce the wrong results.

On a related note, you need to decide how you’re going to onboard new employees. Luckily, the gen AI stuff is relatively easy to use – that’s the whole point. But it doesn’t exist in a vacuum; it exists in the context of many capabilities and features.

To sum up, the ingredients for a CTO implementing gen AI into BI are choosing the right LLM, language support, privacy, and adapting the solution to their organization.

In preparation for all this, what in-house capabilities should a CTO foster? Some believe cooperating with a third-party vendor experienced in AI may be a good way for a CTO to obtain that extra expertise – through workshops and day-to-day work. What do you think about that?

The first thing to recognize is that the next-generation AI interface, with all its magic, is reliant on quality data, a problem that has existed in the BI space since time immemorial.

The problem is illustrated by the “garbage in, garbage out” concept. If a pie chart is based on an incorrect or muddled database, then whatever the pie chart says is itself muddled.

Gen AI itself may work very well, but it is underpinned by the quality of the data, the way it is structured, and therefore by the semantic layer that sits between the generative engine and the data. The semantic layer, in turn, is only as good as the data structure.

It may sound crazy to reduce the implementation of AI to something as uninteresting as basic data, but a lot of the other pieces have been covered pretty well and work out of the box. What kills everybody, including Pyramid, is a poorly designed database or semantic layer.
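A tiny illustration of that “garbage in, garbage out” point, with arbitrary example checks and thresholds: run a few basic sanity checks on a dataset before the gen AI layer is allowed to answer questions from it.

```python
# A few basic sanity checks to run before a dataset is exposed to the gen AI
# layer. The rules and sample rows are arbitrary examples of "garbage in".
rows = [
    {"customer": "Acme", "revenue": 1200.0, "cogs": 800.0, "year": 2024},
    {"customer": "Acme", "revenue": 1200.0, "cogs": 800.0, "year": 2024},  # duplicate
    {"customer": None,   "revenue": 950.0,  "cogs": None,  "year": 2024},  # missing values
    {"customer": "Beta", "revenue": -50.0,  "cogs": 30.0,  "year": 1924},  # suspicious
]

def quality_report(rows: list) -> dict:
    seen, duplicates = set(), 0
    rows_with_nulls = negative_revenue = out_of_range_years = 0
    for r in rows:
        key = tuple(sorted(r.items(), key=lambda kv: kv[0]))
        duplicates += key in seen
        seen.add(key)
        rows_with_nulls += any(v is None for v in r.values())
        negative_revenue += (r["revenue"] or 0) < 0
        out_of_range_years += not (2000 <= (r["year"] or 0) <= 2100)
    return {
        "rows": len(rows),
        "duplicates": duplicates,
        "rows_with_nulls": rows_with_nulls,
        "negative_revenue": negative_revenue,
        "out_of_range_years": out_of_range_years,
    }

report = quality_report(rows)
print(report)
if report["duplicates"] or report["rows_with_nulls"]:
    # Gate the gen AI layer on the report instead of letting it answer from bad data.
    print("Not gen-AI-ready yet: fix the data (or the semantic layer) first.")
```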

Data is also the area where third parties have the biggest role to play; it has traditionally been their space in BI. The challenge is to come in and design a well-constructed, high-performing data estate that takes full advantage of the technology stack, and to build a semantic layer that matches the way business users want to consume data. If they can do that, then gen AI will fall into place.

So it’s all about the data. But there’s also a question of how to measure how well you’re doing it – the definition of done. How can an organization make sure that they are on the right track – one or two years down the road?

The first thing you can measure is user adoption. You should find out how many users log in over a certain period and how they use the features and the data themselves.

User adoption is a relatively simple metric to capture, and it’ll give you an idea of whether the data is ultimately used to drive decisions. If you provide a specific pie chart, find out whether users look at it and get insights they can act on.

You can also measure the number of report requests that your developers are asked to resolve. If it goes down, you know that your less technical users are getting answers to more questions without having to go to someone further up the food chain.

If you run BI self-service technology, you will see that difficult questions require a more skilled user to articulate and to build a given pie chart and its logic. If that user is needed less and less over time – which you can measure in hours or in the number of requests – you will know that the gen AI solution is beginning to close the gap between sophisticated problem-solving and a simplified UX.

To sum it up, I’d focus primarily on measuring user adoption and how busy your most technical users are with helping other users get the most out of the solution. Some other metrics worth looking into are how often the AI tool is opened and used or which database is used most.
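A back-of-the-envelope version of those metrics, assuming a simple, made-up usage log format: count active users, compare self-served questions with developer report requests, and watch how the ratio moves over time.

```python
from collections import Counter
from datetime import date

# Hypothetical usage log: one event per interaction with the BI platform.
events = [
    {"day": date(2024, 6, 3), "user": "ana",  "kind": "genai_question"},
    {"day": date(2024, 6, 3), "user": "ben",  "kind": "genai_question"},
    {"day": date(2024, 6, 4), "user": "ana",  "kind": "report_view"},
    {"day": date(2024, 6, 4), "user": "carl", "kind": "dev_report_request"},
    {"day": date(2024, 6, 5), "user": "ben",  "kind": "genai_question"},
]

def adoption_metrics(events: list) -> dict:
    kinds = Counter(e["kind"] for e in events)
    self_served = kinds["genai_question"] + kinds["report_view"]
    escalated = kinds["dev_report_request"]
    return {
        "active_users": len({e["user"] for e in events}),
        "events_per_kind": dict(kinds),
        # Share of questions answered without pulling in a developer:
        "self_service_ratio": round(self_served / max(self_served + escalated, 1), 2),
    }

print(adoption_metrics(events))
# Track the same numbers month over month: a rising self_service_ratio and a
# falling dev_report_request count suggest gen AI is closing the skills gap.
```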

General advice

Let’s put it all together. What would you advise CTOs and organizations that have yet to embrace AI for their Business Intelligence solutions but are in the process of learning more about it?

You should know that the current generation of AI for BI is a quantum leap compared to everything that existed before. If you’re looking for a way to lower the cost of doing analytics to drive data-driven decisions, now’s the time to get involved and take advantage of it.

Based on what I already said, you should also know there are pretty good answers within the ABI space to the problems of data security, performance, and cost management.

If you choose to sit the current AI-driven revolution out, your less technical and non-technical users will be unable to tap into the wealth of information stored in your data. That is a competitive disadvantage that you’re not addressing, and it’s likely your competitors are. We’re at an inflection point now, and if you don’t take advantage of it, you might be missing something dramatic.

Again, I can’t stress it enough – the current gen AI is much more than just an incremental improvement from the last era or generation of AI. It really is a quantum leap.

Resources

Thank you, Avi, for your insights. Let’s close it with some recommendations for learning resources. Where can managers learn even more about AI-driven Business Intelligence?

I get an enormous amount of information from LinkedIn groups – far more than you can get on Twitter. LinkedIn gives you a professional overlay around very specific user groups.

You can go straight to the analysts. At the moment, Gartner is way ahead in this regard. Two years ago, they realized that AI was the next gigantic tip of the spear. They have a wealth of knowledge about it. I don’t agree with everything they say, but they can give a lot of guidance to people who want to find solutions in the BI space for AI.

Obviously, Gartner is not free for all. You have to buy a subscription. But if the AI expertise is relevant to your organization, it’s well worth it. They’ve been researching the field for many years, and many big companies already have a subscription.

AI for Business Intelligence – what’s next? Four actions for CTOs to take

Controlling your data like Captain Kirk controls his ship, conversing with your analytics engine like with your best buddy – it will be amazing to watch where future trends take AI-based BI. You can be a part of that ongoing revolution. Just:

  • continue to work on the quality of your historical data – that’s where it all begins,
  • pick a solution that takes advantage of the latest and best in LLM technology,
  • think about how you want to adapt an LLM-based solution to your organization’s needs – you may need to start small,
  • measure the impact of predictive analytics, data visualization, and gen AI-driven BI right from the start and adjust accordingly.

Persist, and perhaps soon, you will find yourself at the forefront of something really special.

Do you want to learn more about how Pyramid Analytics changes the world of Business Intelligence with AI?

Check out the official website for expert content, including articles and webinars.
