Toward the end of 2017, Harvard Business Review published its annual list of “Best-Performing CEOs in the World”, based on a number of HBR’s chief executive ranking metrics.
What was compelling about the 2017 listing was its consistency with the prior 2016 listing – 72 of the top 100 CEOs were repeats, and of the 28 CEOs that “fell off” the 2017 listing, 11 had retired from their companies. The takeaway: consistency in longer-term growth and results represents the mark of a great CEO.
The CEO, our fourth and final text analytics persona, was inspired by the decision-making influence and drive for consistent growth of the world’s best chief executives. The CEO’s role within our Text Analytics Platform is to apply extraction logic and make decisions on data within our Collector’s data lake that previously had been tagged and annotated by the Scholar. Ultimately, the CEO persona and components facilitate the final step in our Platform’s content processing pipeline – turning unstructured data and information buried within documentation content into structured data and relevant, actionable insights.
Recognizing the key functional and non-functional requirements, inspired by the greatest needs of chief executives, was important when building the architecture for the CEO persona and the designs for its components.
Let’s take a deeper technical dive into each of these requirements from the perspective of our Text Analytics Platform CEO persona:
Scouring the Data Lake
The Collector’s data lake, filled with tagged and annotated data from the Scholar’s processing, represents the summarized information the CEO components need to make key decisions on data and insights extraction. Logic and algorithms implemented within smaller, atomic CEO components, or imported as custom Python code, assist with the extraction. As a decision-training bonus, supervised machine learning via neural network models – leveraging open source Keras and TensorFlow technology – can be used by the CEO components to continually build decision-making experience that benefits future data and insight extraction decisions.
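To make this concrete, here is a minimal sketch (using Keras with TensorFlow) of the kind of supervised model a CEO component could train on past extraction decisions; the feature layout, layer sizes, and placeholder training data are illustrative assumptions rather than the Platform’s actual configuration:

```python
# Minimal sketch of a supervised extraction-decision model using Keras/TensorFlow.
# The input features (e.g., vectorized Scholar tags/annotations), layer sizes, and
# training data below are illustrative assumptions, not the Platform's actual model.
import numpy as np
from tensorflow import keras

NUM_FEATURES = 128  # hypothetical size of the feature vector built from Scholar annotations

model = keras.Sequential([
    keras.Input(shape=(NUM_FEATURES,)),
    keras.layers.Dense(64, activation="relu"),
    keras.layers.Dense(32, activation="relu"),
    keras.layers.Dense(1, activation="sigmoid"),  # probability that a candidate span is the target data point
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Train on previously labeled extraction decisions (placeholder arrays shown here).
X_train = np.random.rand(1000, NUM_FEATURES)
y_train = np.random.randint(0, 2, size=(1000,))
model.fit(X_train, y_train, epochs=5, batch_size=32)

# At extraction time, the predicted probability doubles as a confidence score.
confidence = float(model.predict(np.random.rand(1, NUM_FEATURES))[0][0])
```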
Scaling
Containerization via AWS ECS allows for the scaling required by the CEO components, enabling a higher capacity to consume Scholar-processed information. The Collector’s data lake REST API endpoints are architected for data access scalability, leveraging AWS ALB in front of a Python Flask web framework running within a Docker container. To scale the re-training processes for the ever-growing number of decisions made against larger volumes of information, AWS GPU instances are leveraged for the deep learning required by expanding information sets.
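For illustration, here is a minimal Flask sketch of the kind of data lake endpoint that could run inside a Docker container behind the ALB; the route, parameters, and in-memory store are assumptions made for the example, not the Platform’s actual API:

```python
# Minimal Flask sketch of a data lake query endpoint.
# The route name, query parameters, and in-memory "data lake" are illustrative
# assumptions; in practice this container would run behind an AWS ALB under ECS.
from flask import Flask, jsonify, request

app = Flask(__name__)

# Placeholder for Scholar-tagged documents; a real deployment would query the data lake store.
DATA_LAKE = {
    "doc-001": {"tags": ["officers", "directors"], "annotations": ["Officers section, page 12"]},
}

@app.route("/documents/<doc_id>/annotations", methods=["GET"])
def get_annotations(doc_id):
    tag = request.args.get("tag")  # optional filter, e.g. ?tag=officers
    doc = DATA_LAKE.get(doc_id)
    if doc is None:
        return jsonify({"error": "document not found"}), 404
    if tag and tag not in doc["tags"]:
        return jsonify({"doc_id": doc_id, "annotations": []})
    return jsonify({"doc_id": doc_id, "annotations": doc["annotations"]})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)  # each container scales horizontally behind the load balancer
```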
Learning from Experience
While deep learning and supervised machine learning via neural network models have already been covered and have shown benefits within the previous two requirements, incorporating human-assisted feedback is also crucial to driving the accuracy of results higher and improving the overall decision-making capabilities of the CEO. Included in the CEO’s components and capabilities is an Exceptions Processing UI that provides workflow-governed management of the exceptions that arise from the CEO’s decision-making. An exception represents a CEO component’s decision that doesn’t meet a specified confidence or accuracy threshold for data/insight extraction. The Exceptions Processing UI allows human analysts to assist the CEO’s decisions by correcting the data/insight extraction where necessary.
Let’s say the CEO has extracted data regarding the Directors and Officers of Apple from Apple’s annual report documentation. If the confidence in its extraction of the Officers' data is less than 90% (likely due to challenges with documentation formatting), the data it has extracted, plus the relevant highlighted section within the annual report document containing the data, will be shown to the human analyst in the Exceptions Processing UI. The analyst can review the document and the extracted data, and make any adjustments through the UI’s workflow-governed data edit screens, including re-highlighting the correct section of the document (if necessary) where the data is found. All actions taken by the human analyst are recorded and leveraged as neural net training data for improving the CEO’s future data extraction decisions.
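Below is a minimal sketch of how such a confidence threshold might route low-confidence extractions into the exceptions workflow and capture analyst corrections as training data; the data structures, function names, and the in-memory queues are illustrative assumptions:

```python
# Illustrative sketch of exception routing and analyst-feedback capture.
# The 0.90 threshold matches the example above; the record shapes and queues are assumptions.
CONFIDENCE_THRESHOLD = 0.90

exceptions_queue = []   # items surfaced to analysts in the Exceptions Processing UI
training_examples = []  # analyst actions recorded for future neural net re-training

def handle_extraction(doc_id, field, extracted_value, confidence, source_span):
    """Accept a high-confidence extraction or route it to human review."""
    if confidence >= CONFIDENCE_THRESHOLD:
        return {"doc_id": doc_id, "field": field, "value": extracted_value, "status": "accepted"}
    exceptions_queue.append({
        "doc_id": doc_id,
        "field": field,
        "value": extracted_value,
        "confidence": confidence,
        "highlighted_span": source_span,  # shown to the analyst alongside the document
    })
    return {"doc_id": doc_id, "field": field, "status": "pending_review"}

def record_analyst_correction(exception, corrected_value, corrected_span):
    """Capture the analyst's correction as a labeled example for re-training."""
    training_examples.append({
        "doc_id": exception["doc_id"],
        "field": exception["field"],
        "label": corrected_value,
        "span": corrected_span,
    })
```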
Delivering Insights
As the final step in our Text Analytics Platform’s content processing pipeline, the CEO persona and components represent the “gateway” to end-consumers of its extracted data and insights. It is essentially responsible for notifying end-consumers (i.e. Subscribers) of extracted data and insights via its Data Delivery module. Additionally, Data Standardization components within the CEO persona allow for consumer-specific translation of extracted data – think of this as a human CEO tailoring a message for a particular audience when giving a public speech. From a technical standpoint, the Data Delivery module includes a Subscription sub-component that allows external consumers to subscribe to extraction events via an API call. After subscribing, the consumer is notified of extraction events via a pub-sub message queue. Data Standardization components receive raw, extracted data and perform a series of standardization, formatting and normalization steps on the data as needed for specific end-consumer integration requirements.
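To make the subscribe-and-notify flow concrete, here is a minimal sketch of a subscription endpoint and an extraction-event publisher; the endpoint shape, event type name, and the simple in-process queue are stand-ins for whatever pub-sub messaging technology the Platform actually uses:

```python
# Illustrative sketch of the Data Delivery subscribe/notify flow.
# The endpoint shape and the in-process queue are stand-ins for the Platform's
# actual Subscription API and pub-sub messaging technology.
import json
import queue

from flask import Flask, jsonify, request

app = Flask(__name__)

subscribers = {}                   # subscriber_id -> requested event types
extraction_events = queue.Queue()  # stand-in for a pub-sub topic

@app.route("/subscriptions", methods=["POST"])
def subscribe():
    """Register an external consumer for extraction events."""
    payload = request.get_json()
    subscriber_id = payload["subscriber_id"]
    subscribers[subscriber_id] = payload.get("event_types", ["extraction.completed"])
    return jsonify({"subscriber_id": subscriber_id, "status": "subscribed"}), 201

def publish_extraction_event(doc_id, standardized_data):
    """Publish a standardized extraction result to interested subscribers."""
    event = {"type": "extraction.completed", "doc_id": doc_id, "data": standardized_data}
    for subscriber_id, event_types in subscribers.items():
        if event["type"] in event_types:
            extraction_events.put(json.dumps({"subscriber": subscriber_id, **event}))
```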
Working with the other Personas
While the CEO persona and components represent the final steps of our overall Text Analytics Platform processing pipeline, we have seen through this blog series how all of the personas work together to form a solution. The CEO really does need the input and processing power of the Collector, Scholar and Inventor personas and components to do its job – not much different from MacMillan, CEO of MacMillan Toys in the all-time classic Tom Hanks movie “Big”, working with his team on new decisions and ideas. The CEO can make the “big” decisions, and as we’ve seen technologically through our personas, there is a heavy reliance on supervised machine learning, deep learning, and even human analyst corrections (where necessary) to provide the absolute best results.
Thanks again for joining me through this deeper dive series of our Text Analytics Platform personas – the Collector, Scholar, Inventor, and CEO. I’m extraordinarily passionate about everything we’re working on within the Kingland Platform, so please stay tuned to future blogs as we share additional features and capabilities that we’re working on within our Platform.