Kingland Thought Leadership

Want to Make Progress with AI? Start with a Question.

Written by Kingland Product Management | Jun 10, 2025 1:36:52 PM

We all know that AI is fundamentally changing how we work. Large language models (LLMs) are delivering new information and enabling us to perform mundane tasks with ease. Every day we are inundated with news articles, press releases, and stories from other leaders about a growing number of AI successes. We can’t turn the page without hearing about a 60, 70, or 80% efficiency gain on a key business process, or an innovation that shortened a monthly job from 3 days to 3 hours. Even a call to your mother might end up with a story about how she used ChatGPT to create a clever brochure for her local book club.

For leaders, the AI environment is becoming a pressure-cooker. If you’re an executive - maybe leading risk, compliance, operations, or IT - you are likely stuck in the middle of this world of AI optimism, expectations, and business complexity. How are you going to use AI to change the business this year? How are you going to deliver efficiencies and a better user experience? Are you going to hire new employees, or invest in AI agents? How are you going to use AI to stay ahead of your competitors?

While optimism, expectations, and complexity can be challenging to manage, AI capabilities are very real, and AI can deliver very real results. In a large enterprise with hundreds of thousands of employees, millions of customers, and hundreds of legacy systems, it can be challenging to evaluate and select the right projects. As you’re evaluating your options and the recommendations from your teams, here’s a simple framework to help you decide where to place your AI bets.


Start with a Question

When selecting an AI project, it's critical to identify some of the most important questions that are being asked in a business process. What is the core question that the business process is seeking to answer? Can I do business with this client? How do we resolve the issue in front of us? Focusing on the question is important for two reasons.

First, most large language models have been developed and refined to simulate language. In effect, they are designed to answer questions. Their answers may not always be perfect, but taking advantage of their speed, scale, and breadth of information is a great way to start testing the boundaries of a use case. LLMs can be a lot like a seven-year-old kid with an insatiable desire to learn. Imagine your favorite seven-year-old. What is that? Why does it look like that? How did it get here? Why did it do that? Why doesn’t it do this? What else does it do? Why does it do that? We all remember children like this (and maybe even some adults). The questions seemingly never end.

Second, while the questions could go on and on, an AI use case needs to begin with a very clear and relevant question that sits at the center of your target business process. For example, here are a number of questions that are relevant to our clients at Kingland:

  • Who is this client?
  • Can I invest in this security?
  • Can I offer this product or service?
  • What are all of our relationships with this customer?
  • Do we have any conflicts with this entity or its related entities?
  • Is this person a director, officer, or key employee of this company?
  • What are the correct terms, rates, price, and payments for this transaction?
  • What is the total value of all accounts, loans, and holdings for this customer?

Each of these questions ties to a key business process. Each question is the start of a use case and helps us understand the job to be done. The right question starts the process off on the right foot…the wrong one does the opposite. Selecting the right question sets the tone for defining the opportunity to use AI to improve the business process at hand.

Are You Sure?

With a good question in hand, the AI work is just beginning. Different foundation models behave differently. Different business processes require different context. Every industry has unique language that must be well understood. Inevitably the question is going to be answered…but can you trust the answer? This is where the hard work of executing an AI solution in an enterprise context must happen.

Let’s take the second question from above as an example: “Can I invest in this security?” To answer this question fully, the AI models must be aware of who you are, the specific financial instrument (security) in question, and the rules, policies, and logic necessary to answer the “can I” part. Those rules, policies, and logic likely require very specific information from various internal systems about what business is occurring, the nature of certain relationships, and even the status of conflicts, restrictions, and other limitations that would influence the “can I” question. After considering all of these different inputs, the question still remains - Are You Sure? Confidence levels can be produced to grade the reliability of the answer, but in a regulated, enterprise environment, the sources and logic matter.
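To make that concrete, here is a minimal sketch of how such a question might be decomposed in code. Everything in it is hypothetical - the function and field names are illustrative, and a simple restricted-list lookup stands in for the policy engine, conflict checks, and internal systems a real solution would consult - but it shows the shape of the problem: who is asking, which instrument, which rules apply, and how confident you can be in the answer.

```python
# Hypothetical sketch: answering "Can I invest in this security?" with a
# traceable decision, a confidence level, and the sources that were consulted.
from dataclasses import dataclass, field

@dataclass
class Answer:
    decision: str                                # "yes", "no", or "needs review"
    confidence: float                            # relative confidence, 0.0 - 1.0
    reasons: list = field(default_factory=list)  # logic trail for later review
    sources: list = field(default_factory=list)  # where each input came from

def can_invest(employee_id: str, security_id: str, restricted_list: dict) -> Answer:
    """Combine who is asking, which instrument, and which rules apply."""
    answer = Answer(decision="yes", confidence=0.9)
    # A simple restricted-list lookup stands in for the richer policy rules,
    # relationship data, and conflict checks an enterprise would actually apply.
    restricted_for = restricted_list.get(security_id, [])
    if employee_id in restricted_for or "ALL" in restricted_for:
        answer.decision = "no"
        answer.confidence = 0.95
        answer.reasons.append(f"{security_id} is restricted for {employee_id}")
    else:
        answer.reasons.append("no restriction found for this employee and security")
    answer.sources.append("internal restricted list (compliance system)")
    return answer

if __name__ == "__main__":
    restricted_list = {"XYZ-123": ["ALL"]}       # hypothetical restricted security
    print(can_invest("emp-042", "XYZ-123", restricted_list))
```

However an LLM is layered into that flow, the answer still needs to carry its reasons and sources with it - that is what makes “are you sure?” answerable.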

Enterprise Requirements

To set your team up for success, it's important to unpack common enterprise requirements. Here are a handful of considerations as you’re working through your AI priorities:

  • Sources - Can any source be used to answer the question, or are specific sources disallowed or, conversely, required in order to trust the answer? Policies, regulations, and even best practices may drive these requirements. Your best operations people intuitively know what sources are required…but an AI solution won’t unless you design for it and provide the right context. As an example, LLMs can be given source and policy content that provides more context, helping the models interpret and answer the question at hand more confidently.
  • Documents - Many processes are still, and will remain, document-heavy. These documents may be a rich source of information to either answer those key questions or to augment the answers required by your business process. Bringing natural language processing together with your LLM can add precision to your AI use case, providing a level of data enrichment from document-heavy processes that was previously difficult to achieve.
  • Model Risk Management - While AI can deliver an answer, model risk management expectations will require transparency into how the answer was derived, the sources used (e.g., internal, external, trusted), and any logic applied. Managing decision-making risk and avoiding unintended bias protects the integrity of the business process, but it also means the solution may be more complex to implement in order to provide this transparency.
  • Users of the Answers - Most questions require a follow-up question to produce a reliable result. Sometimes those next questions are straightforward, yet many times in complex financial environments the answers and follow-up questions can be very nuanced. If the users of an AI solution are people, does it work to let them engage iteratively in the process? If the users of an AI solution are other systems, can you build in safeguarding logic and automation to produce a precise result? System-based users may also expect answers in a particular output format or data schema to better enable automation; for example, answers returned as JSON rather than natural language are far easier for downstream systems to consume (a sketch of such a payload follows this list).
  • Data - Every AI process is going to leverage data - either at the forefront during the initial questions, or throughout the decision-making process. If the data has quality issues, is inconsistent across multiple sources, or simply is not fit for the purpose of the question, the efficacy of the AI solution will suffer. Planning for some level of data refinement or data controls is essential in most AI project selection processes.
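To illustrate the “Model Risk Management” and “Users of the Answers” points, here is a sketch of what a structured, system-friendly answer might look like. The field names are hypothetical rather than any standard schema; the point is that the decision travels together with its confidence, sources, and logic trail, so downstream systems can branch on it and reviewers can trace it.

```python
import json

# Hypothetical structured answer for a system-to-system consumer. The decision,
# confidence, sources, and logic trail travel together in a single payload.
answer = {
    "question": "Do we have any conflicts with this entity or its related entities?",
    "decision": "needs_review",
    "confidence": 0.72,
    "sources": [
        {"name": "internal CRM", "type": "internal"},
        {"name": "sanctions screening feed", "type": "external"},
    ],
    "logic": [
        "entity matched a related party in the CRM",
        "no sanctions hit, but ownership data was last refreshed 14 months ago",
    ],
}

print(json.dumps(answer, indent=2))

# A downstream system can branch on fields instead of parsing prose.
if answer["decision"] != "yes" or answer["confidence"] < 0.85:
    print("Routing to a human reviewer with sources and logic attached.")
```

Whether or not your schema looks like this, agreeing on one early makes integration, audit, and human review far easier.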

These are just a few of the enterprise considerations; from system integration to audit to governance models, there are many more requirements to work through.

Partner for Perspective

We are living in unique times where new AI use cases emerge almost every day. Once you have selected your use cases, hardened your questions, and prioritized your projects, it’s time to execute. Managing execution risk is the job of every executive, and when it comes to AI, partners are a great way to improve the odds of success. Here are three ways partner organizations can bring important perspective to your initiative.

  1. Removing bias
    Partners can see your use case and think about your questions without the unintended bias of your day-to-day operating environment. They can challenge the status quo of information sources, question prioritization, and even the business outcomes that may be possible.
  2. Industry Expertise
    Partners know your peers and they know other use cases. Your ideas are likely unique, but pairing them with what has worked for others in your industry, or even in other industries, can yield great results. In regulated industries, a “strength in numbers” approach is a great way to make progress and choose industry-accepted use cases.
  3. Technical Expertise
    Partners know data, they know systems, and they likely know different LLMs and technical options that your team may not be aware of. Seeking practical advice to help achieve your AI vision can take months off your implementation and save you millions in missteps, particularly with new technologies.

Final Thoughts

As Albert Einstein famously said: “The important thing is not to stop questioning.” In this world of AI optimism, expectations, and business complexity, identify the important questions, experiment, and then ask - are you sure?

Progress awaits.

