Navigating the era of generative AI as a pro

Abeer Albashiti
5 min read · Jul 20, 2023

What does generative AI mean? Is it the same across all industries? Is it essential to embrace it and look for opportunities? How do we operationalize it in our lives and businesses without compromising our singularities? Who is leading the scene now? And how do we account for diversity and inclusion, data privacy, security, and governance?

Let’s start with a quote from scholar Lotfi Zadeh: “The closer one looks at a real-world problem, the fuzzier becomes the solution.” This applies to all real-life situations, including relationships, goals, passions, and work.

If it is hard for us to predict, detect, or recommend something, it will be harder, if not impossible, for AI models to do it. This does not mean systems with generative learning capabilities are not credible; rather, they rely on their knowledge of the data. They master data patterns with high efficiency and accuracy, NOT the truth or reality.

I agree with describing generative AI systems, tools, and models as brilliant parrots that augment our life experiences so we can focus on what we are singular at, like our 360-degree perception of things, situations, data, and context. This empowers human innovation, creativity, critical thinking, and compassionate behavior.

To harness the power of generative AI, experts from multiple disciplines must work together to produce aligned systems, where outcomes match humans’ requirements, prompts, and expectations.

This means it is necessary to democratize all stages: data acquisition, modeling, deployment, monitoring, and justification. Diverse team members, leaders, and regulators from different fields and backgrounds can then enrich the models with their perspectives.

And the question remains open: who will lead this, and how do we identify biased outcomes?

Creating an AI model for all requires representing all groups massively in the development phases. For example, Emotion AI is the field most urgently in need of regulation and democratization, so that we build empowering models, NOT manipulative, misleading systems that produce a false perception of our emotions and damage our image and identity.

Thus, generative Emotion AI is a double-edged sword.

As humans, high self-awareness and self-knowledge require continuous self-purification, which helps us recognize our emotions and use them to serve us. An upgrade of this is emotional intelligence, where we can sense and interpret others’ emotions and then communicate effectively to support our interactions and relationships.

Now imagine that, to build comprehensive Emotion AI models, all of the above must be captured. However, after working in this field for over a decade, I am confident that Emotion AI models are valid only in specific, well-defined contexts with acceptable accuracy, and that they require massive, diverse pattern data and continuous intervention from domain experts to fine-tune parameters and correct biased outcomes.

It is critical to mention that some applications where Emotion AI is heavily used, such as healthcare and wellbeing, require special oversight because factual correctness is hard to guarantee; false positives and false negatives are critical there and may produce harmful outcomes.
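To make that oversight concrete, here is a minimal sketch of one common guardrail: routing low-confidence Emotion AI predictions to a human domain expert instead of acting on them automatically. The labels, threshold, and triage logic are illustrative assumptions, not any specific product’s method.

```python
# Illustrative sketch: defer low-confidence Emotion AI predictions
# to a human domain expert rather than acting on them automatically.
# The label set and threshold below are hypothetical choices.

from dataclasses import dataclass

EMOTIONS = ["joy", "sadness", "anger", "fear", "neutral"]
REVIEW_THRESHOLD = 0.85  # would be set stricter in healthcare/wellbeing contexts


@dataclass
class Prediction:
    label: str
    confidence: float


def triage(pred: Prediction) -> str:
    """Accept only high-confidence, in-scope predictions; defer the rest."""
    if pred.label not in EMOTIONS:
        return "reject"          # out-of-scope label, never act on it
    if pred.confidence < REVIEW_THRESHOLD:
        return "expert_review"   # a false call here could harm the user
    return "accept"


print(triage(Prediction("sadness", 0.62)))  # -> expert_review
print(triage(Prediction("joy", 0.93)))      # -> accept
```

The design choice is deliberate: a false positive or false negative is never silently consumed downstream; it is either rejected or escalated to a person.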

Multimodal large language models, then, are great tools for building responsible Emotion AI models that can augment our life experiences with more humanized technologies, ones that engage humans’ most sensitive part: our emotional responses.

Let’s dig deeper into the concept of large language models, which the Stanford Institute for Human-Centered Artificial Intelligence calls foundation models: models trained on broad data using self-supervision at scale that can be adapted to a wide range of downstream tasks [Ref.].

Again, being self-supervised at a large scale means generalized outcomes, which is perfect in some scenarios and harmful in others, especially when the data or the developers do not represent the user population equally; racist, biased, annoying results will always surface. Thus, fine-tuning these models to the context of the use case at hand is not debatable, and all leaders should start doing it now to harness the models’ power.
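As a sketch of what such fine-tuning can look like in practice, here is a minimal example using the Hugging Face `transformers` and `datasets` libraries. The base model, labels, and two-example dataset are placeholder assumptions for illustration; a real effort would use a domain dataset curated and audited by a diverse team.

```python
# A minimal fine-tuning sketch: adapt a general-purpose foundation model
# to a narrow, well-defined context. Model name and data are placeholders.

from datasets import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

model_name = "distilbert-base-uncased"  # small base model, for illustration only
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

# Hypothetical domain-specific examples (label 1 = relevant to our context).
data = Dataset.from_dict({
    "text": ["patient reports feeling calm", "unrelated marketing copy"],
    "label": [1, 0],
})
data = data.map(lambda x: tokenizer(
    x["text"], truncation=True, padding="max_length", max_length=64))

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="out", num_train_epochs=1,
                           per_device_train_batch_size=2),
    train_dataset=data,
)
trainer.train()  # adapts the broad model to our specific use case
```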

In addition, evaluation criteria have been developed to ensure that models are built, deployed, monitored, and justified responsibly. Stanford’s Holistic Evaluation of Language Models (HELM) is one of the leading reliable evaluation frameworks based on the latest research.
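To make the idea of holistic evaluation concrete, here is a toy sketch in the spirit of HELM: scoring the same predictions on more than one axis at once, not accuracy alone. The metrics and data are hypothetical stand-ins; this is not the HELM framework or its API.

```python
# Toy illustration of multi-axis evaluation: a model is only "good"
# if it scores well on accuracy AND fairness, not just one metric.

def accuracy(preds, labels):
    return sum(p == l for p, l in zip(preds, labels)) / len(labels)


def demographic_gap(preds, labels, groups):
    """Accuracy gap between groups; a large gap signals biased outcomes."""
    per_group = {}
    for g in set(groups):
        idx = [i for i, gg in enumerate(groups) if gg == g]
        per_group[g] = accuracy([preds[i] for i in idx], [labels[i] for i in idx])
    return max(per_group.values()) - min(per_group.values())


preds = [1, 0, 1, 1]
labels = [1, 0, 0, 1]
groups = ["a", "a", "b", "b"]  # hypothetical demographic groups

report = {"accuracy": accuracy(preds, labels),
          "fairness_gap": demographic_gap(preds, labels, groups)}
print(report)  # {'accuracy': 0.75, 'fairness_gap': 0.5} -> flag for review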

Thus, for business leaders, embracing the Responsible AI concept from day one, with a chief ethics officer contributing to the technology’s creation at every phase, is more necessary now than ever.

Again, to augment industries and workflows with the power of generative AI, leaders, entrepreneurs, product designers, and industry creators of every kind should consider the experience of users, communities, and society, then evaluate and mitigate the associated risks by co-creating human-centric AI models, with the Data Product Canvas methodology at the heart of the design, development, deployment, monitoring, evaluation, and modification processes. Such an approach fulfills diversity and inclusion, equality, trust, and fairness, and it mitigates bias in the data and the outcomes.

As expected, the leading players in generative AI research are Stanford, Berkeley, and The Alan Turing Institute. On the industrial level, OpenAI, Stability AI, and DeepMind dominate the field.

Now, let’s take one actual use of ChatGPT for businesses that have AI elements in their technology or provide data-driven products: evaluating their position relative to others in terms of technology, business cases, and knowledge base. Business leaders with tech backgrounds should continuously check these tools to ensure their offerings are still relevant, NOT obsolete.

While doing that, they should be mindful of the type of data they share, to avoid disclosing their singularities as a business to a giant transformer like ChatGPT; if they do, they will compromise their existing solution and will then need to innovate fast to avoid becoming outdated.

Does this mean it’s dangerous to use tools like ChatGPT?

Of course not. Yet, as with every Large Language Model (LLM), we must carefully consider how we ask queries (how we prompt), to what extent we share the business context, and how we benefit from and build on the results of such tools, then apply them within our closed environment and detailed business context.
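As a minimal sketch of that discipline, assuming the 2023-era `openai` Python client and a hypothetical product name, the idea is to redact sensitive terms locally before any query leaves our environment:

```python
# Sketch: prompt an external LLM without leaking proprietary details.
# Redact sensitive terms locally, send only the generic question, then
# apply the answer inside the closed environment. Names are illustrative.

import os
import re

import openai  # pip install openai (2023-era client)

openai.api_key = os.environ["OPENAI_API_KEY"]

SENSITIVE = {
    r"AcmeScore v\d+": "[OUR-MODEL]",   # hypothetical internal product name
    r"\b\d{3}-\d{2}-\d{4}\b": "[ID]",   # ID-like numbers
}


def redact(text: str) -> str:
    """Replace each sensitive pattern with a neutral placeholder."""
    for pattern, token in SENSITIVE.items():
        text = re.sub(pattern, token, text)
    return text


question = "How do competitors position products like AcmeScore v2?"
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": redact(question)}],
)
print(response.choices[0].message["content"])
```

The principle, not the exact code, is what matters: the prompt that leaves the building should never contain what makes the business singular.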

As a result, to trust AI models, we need to create acceptable, actionable, agile norms through a responsible framework of development, deployment, monitoring, and evaluation, starting NOW. We need to be proactive rather than waiting to witness the capabilities and risks in our lives. At the same time, we should avoid overregulating AI at the organizational or societal level, as this would stifle the creative, innovative opportunities that emerge.

And always remember: humans are and will always be more creative, innovative, compassionate, and empathetic than any AI system, as long as we utilize our singularities and stay aware of the societal disruption we are moving toward. So always look for the opportunities that keep us technology-liberated.

Abeer Albashiti

You can consider me an enabler who loves to connect with people and lives a life of service :)