Responsible AI

IAT - TAFE NSW - Course Notes

Great to see TAFE NSW - Institute of Applied Technology (IAT) offering an introductory Microskills course on Responsible AI.

This short course addresses the challenges and concerns surrounding privacy, liability, fairness, transparency, and accountability in the use of AI. It aims to build trust in AI processes and is accessible to individuals without prior programming or computer science experience.

It’s a bit Microsoft-focused, but it’s a great start to get people thinking about the ethical and societal impact of AI.

Notes on Responsible AI

In the pursuit of ethical and responsible AI usage:

  • Organizations involved in the development and utilization of AI bear the responsibility of recognizing and addressing any unintended ramifications arising from its use.
  • AI should be developed and employed in accordance with well-defined ethical principles.

Principles guiding AI implementation

Given the far-reaching societal impact of AI, it is incumbent upon companies, governments, and researchers to carefully contemplate and minimize any unforeseen negative consequences. Several organizations, including Microsoft and Deloitte, have established internal policies and guiding principles to govern the development and utilization of AI technology.

Microsoft has developed the following 6 principles to support AI development and use:

  1. fairness
  2. reliability and safety
  3. privacy and security
  4. inclusiveness
  5. transparency
  6. accountability.

AI Governance

Australia’s 8 Artificial Intelligence (AI) Ethics Principles are designed to ensure AI is safe, secure and reliable:

  1. human, societal and environmental wellbeing
  2. human-centred values
  3. fairness
  4. privacy protection and security
  5. reliability and safety
  6. transparency and explainability
  7. contestability
  8. accountability.

These serve as guidance rather than strict requirements. As such, your organization has the flexibility to adopt only a subset of these principles or modify them to align with your specific context.

Ethics committee benefits:

  • Enforcing internal governance: The ethics committee ensures that responsible AI governance is established within the organization.
  • Impartial feedback and guidance: Project teams can receive unbiased feedback and guidance on how to effectively mitigate risks or maximize benefits when utilizing AI.
  • Diverse expertise and comprehensive risk management: The committee’s diverse range of expertise and perspectives allows the organization to identify and address complex ethical issues associated with AI technology.
  • Building trust in AI products and services: Ethics committees enhance transparency by showcasing how AI is used within the organization, demonstrating the organization’s values and proactive approach to advancing AI governance, thereby fostering trust among both the public and internal stakeholders.

Chief AI ethics officer: establishes robust ethical values and accountability frameworks within the organization. They also ensure that all personnel involved in AI activities are knowledgeable about and uphold the organization’s guiding principles.

Ethics office: offers guidance and support regarding ethics and conduct to executives and staff members. Comprising individuals from various levels within the organization, the office is united in its commitment to upholding ethical principles.

Ethics committees / advisory boards: tasked with monitoring and approving AI projects while also establishing standardized decision-making processes. The committee consists of experts from diverse fields who collaborate to fulfill these responsibilities.

Centralised vs Decentralised model

Centralizing AI governance advantages:

Consistent governance practices in AI are crucial for organizations to ensure effective and responsible use of AI technologies. Here are the key points:

  • Bringing together expertise: Centralized governance brings personnel with AI development and governance expertise under one umbrella. This enables strong control over policies and processes, ensuring consistent AI governance practices across different business units.
  • Efficiency and participation: A specialized governance team takes charge of overseeing and enforcing governance throughout the organization. This centralization eliminates the need for individual departments to develop their own AI governance, leading to greater efficiency. It also ensures active participation and compliance with security and technical requirements across departments.
  • Knowledge sharing: With closely collaborating governance teams and compatible governance solutions and processes, centralization facilitates convenient and efficient knowledge sharing.

Challenges of the centralized model:

  • Lack of adaptability: The larger size and complex distribution of governance responsibilities in a centralized model can hinder swift responses to changes. This rigidity may expose teams to the risk of regulatory capture and impede their ability to adapt to or take advantage of technological advancements.
  • Biases in group decision-making: Centralization can stifle creativity and innovative thinking within teams due to inefficient distribution of responsibilities and governance activities. This may result in biases creeping into group decision-making processes.

Decentralised model

Decentralisation is a practical approach that caters to diverse AI needs, responsibilities, and strategies within an organization. It offers the following benefits:

  • Adaptability and speed: Decentralised business units can swiftly adapt and respond to new developments in technical and regulatory domains, unhindered by centralized governance requirements.
  • Creativity: Decentralised business units have the freedom to develop approaches tailored to their specific needs and environments, without concerns about the distribution of responsibilities and governance activities.

Challenges of decentralised model

While the decentralised model has advantages, it also presents challenges:

  • Limited cross-sharing of knowledge: Decentralised teams operate independently, developing context-specific strategies and solutions. This hampers the adoption and refinement of these approaches by other teams within the organization.
  • Contradictory strategies and policies: Decentralised teams are prone to conflicting mandates and inadequate communication compared to their centralised counterparts. Consequently, they may unintentionally develop and implement contradictory and incoherent strategies and policies.
  • Skills shortages: The decentralised model faces the significant risk of severe skills shortages. Organizations struggle to find and distribute expertise across different units.

Hybrid model

Organizations can derive advantages from adopting a hybrid model, including:

  • Knowledge sharing: The governance structure of a hybrid model facilitates easy knowledge sharing and learning among departments. The use of a common platform across business units ensures consistent adherence to central standards.
  • Flexibility and adaptability: In the dynamic AI landscape, hybrid models offer greater flexibility and adaptability. Striking the right balance between the two governance models allows organizations to effectively accommodate new initiatives and evolve according to their specific needs and goals.

Centralised model:
The centralised model enables organizations to exert influence over larger entities, shaping government policies and fostering greater participation in AI governance discussions, even for regimes with limited resources.

Decentralised model:
In the decentralised model, organizations can respond swiftly to changes, embrace creativity and innovation, and encourage open and transparent conversations about improving and leveraging AI.

Hybrid model:
The hybrid model brings together the strengths of both centralised and decentralised approaches. It fosters knowledge sharing through a common platform across business units and offers flexibility to accommodate new initiatives, tailored to the specific requirements of the organization.

Third-party AI systems

A third-party AI system, also known as ‘off-the-shelf AI’ or AI-as-a-service (AIaaS), offers pre-designed algorithms to address specific tasks. Similar to software-as-a-service (SaaS), it seamlessly integrates into business processes and undergoes constant management and optimization. Common uses of third-party AI systems include image recognition, recommendation engines, natural language processing, and fraud detection. Before utilizing third-party AI, organizations should undertake the following activities:

  1. Identify areas where AI can enhance effectiveness.
  2. Ensure data collection from relevant sources.
  3. Develop an AI-based solution for algorithm-based decision-making.
  4. Implement the solution once developed.

Benefits and challenges of third-party AI systems

Using third-party AI systems entails both benefits and challenges, which help determine whether to adopt an off-the-shelf solution or build an in-house system. Notable advantages and challenges include:

Benefits:

  • Reduced cost and implementation time: Ready-made infrastructure and pre-trained algorithms minimize the need for building from scratch, saving time and resources.
  • Scalability assurance: Vendors prioritize scalability, allowing the system to grow with the organization and accommodate future demands.

Challenges:

  • Limited control over the system: Readymade infrastructure and algorithms restrict customization options. Extensive customization may require searching for vendors offering such options.
  • Security compliance concerns: Utilizing third-party AI involves sharing data with vendors, including potentially sensitive or confidential information. Careful consideration of data processing, storage, and confidentiality is essential.

First-party AI systems

Well-designed first-party AI systems, accompanied by risk identification during development, offer various benefits for businesses, such as:

  • Customization and flexibility: In-house AI systems can be tailored to specific organizational needs.
  • Security: Data remains secure and confidential as there is no need to share it with third-party vendors, particularly important for sensitive cases with privacy and personal information risks.
  • Intellectual property ownership: In-house AI systems become valuable assets, granting organizations a competitive edge. Ownership rights allow potential opportunities to sell the AI system as a solution to other organizations.

Challenges of developing in-house AI systems

The challenges associated with developing and implementing an in-house AI system primarily stem from the need to create it from scratch and continually oversee its monitoring, maintenance, and improvement. Key challenges of first-party AI systems include:

  • Inadequacy of expertise: The development and successful implementation of an AI system require specialists or technical experts with the necessary skills and knowledge. However, many organizations struggle to recruit individuals who possess the required expertise, hindering project completion.

  • Time commitment: Building and maintaining an AI system internally demands a significant time investment, particularly if the organization needs to hire new staff with AI-related skills.

  • Lack of a delivery-oriented approach: Third-party AI systems undergo rigorous testing to ensure accuracy and the delivery of expected results. In contrast, in-house solutions often overlook essential features such as scalability, resulting in a final product that may lack delivery-focused capabilities.

Bringing AI culture to developers

To cultivate an AI culture among developers and ensure responsible AI deployment, organizations should:

  • Align AI design with ethical principles through cross-functional collaboration.
  • Establish a risk prioritization scheme and consult the ethics office or committee for ethical concerns.
  • Provide tools to detect inefficiencies and safeguard against biases and safety gaps.
  • Establish clear lines of accountability and responsibilities based on guidelines.

Compliance with guiding principles of AI

To comply with AI guiding principles, organizations should:

  • Form an AI advisory board for neutral and critical feedback.
  • Review policies and standards to address AI’s unique characteristics.
  • Maintain a centralized inventory of AI projects to assess risks.
  • Provide training on AI capabilities and risks.
  • Invest in monitoring and evaluation tools.
  • Embrace challenges for iterative improvement.

Successfully implementing and deploying AI

To achieve successful AI implementation, leaders should:

  • Foster an environment where risk-taking and learning from failure are encouraged.
  • Facilitate information sharing and risk management across the organization.
  • Establish high standards, methodologies, and processes for iterative AI development.

Trust in AI

Trust is crucial for the successful deployment and use of AI. Building trust in AI involves implementing governance systems that ensure responsible AI practices. To establish trust in an AI system, it should offer:

  • Transparency: Providing clear visibility into the system’s operations and decision-making processes.
  • Explanation of decisions: Offering reasons and justifications for the decisions made by the AI system.
  • Privacy: Ensuring the protection of user data and maintaining privacy standards.
  • Robustness: Demonstrating reliability and resilience in various settings.

Without these components and trust in AI, organizations will struggle to implement AI effectively, and third-party vendors may face challenges in selling their AI solutions. Investing in enhancing AI functions and implementing strong governance tools is crucial for building and maintaining trust in AI.

While governance systems and compliance processes contribute to building trust in AI, it is the responsibility of the chief decision maker to foster a culture of responsible AI, ensuring thoughtful and trustworthy use of AI. Trust in AI goes beyond its ability to serve business needs; it also encompasses factors such as trustworthy use, explainability, functionality evaluation, privacy, and resistance to risk and unpredictability in different settings.

  • Putting principles into practice at Microsoft
  • Google’s AI Principles
  • SAP’s Guiding Principles for Artificial Intelligence

The Commonwealth Scientific and Industrial Research Organisation’s (CSIRO’s) ‘Artificial Intelligence: Australia’s Ethics Framework’ discussion paper outlines how to establish ethics governance in your organisation.

Engineering Responsible AI

Risks of AI

AI technology carries various risks that need to be addressed to ensure responsible and beneficial deployment. Key risks include:

  • Biased algorithms: AI algorithms heavily rely on data, and if the data itself is biased, it can result in skewed outcomes that perpetuate existing inequalities and disadvantage minority or vulnerable groups.
  • Sphere of control: As AI advances, there is a concern about the level of influence and autonomy granted to AI systems. While increased autonomy can enhance efficiency, it may come at the expense of empathy and human consideration, particularly in making sensitive decisions.
  • Privacy: AI’s ability to process private data for system optimization raises concerns about the potential compromise of personal privacy and the misuse of sensitive information for political purposes.
  • Liability: Determining responsibility for AI mistakes and unintended actions poses legal challenges, as it is often unclear who should be held liable: the user, the AI creator, or the AI system itself.
  • False information: AI’s capability to generate realistic fake content, such as tweets, images, and voices, presents a risk of widespread misinformation, manipulation, and blackmail, making it difficult to discern truth from falsehood.

Case study: Tay
The Microsoft chatbot, Tay, serves as an example of the risks associated with AI. Designed to mimic human communication, Tay became a platform for racism and hate speech due to the data it received, highlighting the potential for unintended consequences in AI systems.

6 Principles of AI use

1. Fairness

AI systems must be developed to ensure fair treatment for all individuals and avoid unjust disparities. Supporting this principle involves:

  • Using human judgment: Employing sound human judgment to review and, where necessary, override AI decisions, and taking accountability for decisions that affect others.
  • Addressing bias: Recognizing potential biases and their impact on AI-based recommendations, utilizing training datasets that encompass societal diversity.
  • Designing unbiased models: Creating AI models that can learn and adapt without developing biases over time.
  • Using fairness checklists: Employing tools like the AI Fairness Checklist and exploring Python packages like Fairlearn to assess bias (a minimal Fairlearn sketch follows this list).
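
As a concrete illustration of assessing bias with Fairlearn, the sketch below computes per-group accuracy and the demographic parity difference for a simple classifier. The data, sensitive feature, and model are illustrative assumptions, not part of the course material.

```python
# Minimal sketch: assessing group fairness with Fairlearn (toy, illustrative data).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from fairlearn.metrics import MetricFrame, demographic_parity_difference

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))            # feature matrix (toy data)
group = rng.integers(0, 2, size=500)     # sensitive feature, e.g. two demographic groups
y = (X[:, 0] + 0.5 * group + rng.normal(scale=0.5, size=500) > 0).astype(int)

model = LogisticRegression().fit(X, y)
y_pred = model.predict(X)

# Accuracy broken down by sensitive group.
by_group = MetricFrame(metrics=accuracy_score, y_true=y, y_pred=y_pred,
                       sensitive_features=group)
print(by_group.by_group)

# Demographic parity difference: gap in selection rates between groups (0 is ideal).
print(demographic_parity_difference(y, y_pred, sensitive_features=group))
```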

2. Reliability and Safety

To establish trust, AI systems should operate safely, consistently, and reliably. Supporting this principle includes:

  • Rigorous testing: Conducting thorough testing during development to ensure AI systems can respond safely in unexpected scenarios and perform as expected.
  • Ongoing maintenance: Regularly maintaining and protecting AI systems to prevent unreliability and inaccuracies.
  • Human judgment: Acknowledging that human judgment is responsible for deciding when and how to use AI, and identifying any blind spots or biases.
  • Monitoring data drift: Monitoring changes in data patterns and adapting models to maintain accuracy, leveraging tools like Azure Machine Learning, InterpretML, and Error Analysis.
  • Utilizing relevant frameworks and tools: Exploring resources such as the Pandora debugging framework and Microsoft AirSim for enhanced reliability.

3. Privacy and Security

As AI becomes pervasive, protecting privacy and security is essential. Supporting this principle involves:

  • Adhering to privacy laws: Ensuring AI systems and developers comply with transparency requirements regarding data collection, use, and storage.
  • Customer control: Allowing customers to have control over how their information is used.
  • Robust compliance processes: Investing in strong compliance processes and systems to safeguard data used by AI.
  • Leveraging guidelines and technologies: Considering resources like Microsoft’s Securing the Future of Artificial Intelligence and Machine Learning, as well as technologies such as Microsoft SEAL, Counterfit, SmartNoise, Presidio, Azure confidential computing, and Private Data Sharing Interface, to enhance privacy and security.

https://www.microsoft.com/en-us/research/project/microsoft-seal/
https://smartnoise.org
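
SmartNoise is built on differential privacy. As a rough conceptual sketch (not the SmartNoise API), the snippet below shows the idea behind the Laplace mechanism: adding calibrated noise to a count query so that no single individual's data can be inferred from the result. The function name and data are hypothetical.

```python
# Conceptual sketch of the Laplace mechanism behind differential privacy.
# This is NOT the SmartNoise API; it only illustrates the underlying idea.
import numpy as np

def private_count(values, predicate, epsilon=0.5):
    """Differentially private count of items satisfying `predicate`.

    A counting query has sensitivity 1 (adding or removing one person changes
    the count by at most 1), so Laplace noise with scale 1/epsilon suffices.
    """
    true_count = sum(1 for v in values if predicate(v))
    noise = np.random.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

ages = [23, 37, 45, 29, 52, 61, 34]            # toy data
print(private_count(ages, lambda a: a >= 40))  # noisy count of people aged 40+
```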

4. Inclusiveness

AI should be designed to benefit a diverse range of individuals and address potential barriers. Supporting this principle involves:

  • Inclusive design practices: Using inclusive design to identify and address potential exclusionary barriers in product environments.
  • Opportunities for innovation: Removing barriers to foster innovation and create better experiences that benefit everyone.
  • Utilizing inclusive design resources: Exploring Microsoft’s inclusive design practices and toolkit for guidance.

5. Transparency

Transparency is essential to help people understand how AI is used and to build trust. Supporting this principle includes:

  • Improving intelligibility: Enhancing the understandability of AI systems and their purpose.
  • Stakeholder understanding: Ensuring stakeholders comprehend how AI systems work and why they are utilized.
  • Promoting honesty and openness: Encouraging transparency about the use of AI systems.
  • Utilizing transparency tools: Leveraging resources such as datasheets for datasets and the InterpretML open-source package for transparency.
  • Exploring model interpretability: Examining Azure Machine Learning’s model interpretability feature.

6. Accountability

Accountability is crucial to hold creators and users of AI systems responsible for their operations. Supporting this principle involves:

  • Internal review bodies: Establishing internal bodies to provide oversight and guidance on AI systems, setting standards and best practices.
  • Addressing sensitive cases: Ensuring human involvement in decision-making and implementation, especially in sensitive situations that impact access to vital services, create risks, or infringe on human rights.
  • Utilizing accountability resources: Exploring Microsoft’s HAX workbook, interaction guidelines, and the benefits of datasheets for datasets.
  • Managing the model development process: Employing MLOps in Azure Machine Learning to document and manage the entire model development process.
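
To make “documenting the model development process” concrete, the minimal sketch below uses MLflow-style experiment tracking, which Azure Machine Learning supports as a tracking backend. The dataset, parameters, and run name are illustrative assumptions rather than a prescribed Azure ML workflow.

```python
# Minimal sketch: record parameters, metrics and the trained model so every
# run is documented and auditable. Uses MLflow tracking (supported by Azure
# Machine Learning as a backend); names and values are illustrative.
import mlflow
import mlflow.sklearn
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

with mlflow.start_run(run_name="baseline-logreg"):
    params = {"C": 1.0, "max_iter": 5000}
    model = LogisticRegression(**params).fit(X_train, y_train)

    mlflow.log_params(params)                    # what was tried
    mlflow.log_metric("accuracy",
                      accuracy_score(y_test, model.predict(X_test)))
    mlflow.sklearn.log_model(model, "model")     # the artefact itself
```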

Tools for Responsible AI Engineering

Audit AI

  • Measures and mitigates discriminatory patterns in training data and predictions.
  • Makes machine learning algorithms fairer, helps to identify bias.
    https://github.com/pymetrics/audit-ai
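
As a concrete illustration of the kind of check audit-ai automates, the plain-Python sketch below computes per-group selection rates and the adverse impact ratio against the common “four-fifths rule”. It is not the audit-ai API; the function and data are hypothetical.

```python
# Plain-Python sketch of an adverse-impact check (the kind of test a package
# like audit-ai automates). NOT the audit-ai API; purely illustrative.
from collections import defaultdict

def adverse_impact_ratio(groups, selected):
    """Ratio of the lowest group selection rate to the highest.

    A ratio below 0.8 (the 'four-fifths rule') is a common flag for
    potential adverse impact and warrants closer investigation.
    """
    totals, hits = defaultdict(int), defaultdict(int)
    for g, s in zip(groups, selected):
        totals[g] += 1
        hits[g] += int(s)
    rates = {g: hits[g] / totals[g] for g in totals}
    return min(rates.values()) / max(rates.values()), rates

# Toy example: group label per applicant and whether the model selected them.
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
selected = [1, 1, 1, 0, 1, 0, 0, 0]
ratio, rates = adverse_impact_ratio(groups, selected)
print(rates)   # selection rate per group
print(ratio)   # below 0.8 suggests potential adverse impact
```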

What-If Tool (WIT)

AI Explainability 360

PwC’s Responsible AI

  • Customisable frameworks, tools and processes.
  • Use AI ethically throughout the design and implementation process.

MS InterpretML

  • A package by Microsoft including different techniques for machine learning interpretability.
  • Understand your model’s global behaviour, or understand the reasons behind individual predictions.
    https://www.microsoft.com/en-us/research/uploads/prod/2020/05/InterpretML-Whitepaper.pdf

Intelligibility, also referred to as interpretability, plays a crucial role in ensuring transparency in AI systems. It emphasizes the need for individuals to comprehend, monitor, and respond to the technical behavior of AI systems. While the terms “intelligibility” and “interpretability” are often used interchangeably, they both underscore the importance of making AI systems understandable to humans.
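
As a brief illustration, the sketch below trains InterpretML’s Explainable Boosting Machine (a glass-box model) on a scikit-learn toy dataset and renders its global and local explanations. The dataset choice is an assumption for demonstration only.

```python
# Minimal sketch: global and local explanations with InterpretML's
# Explainable Boosting Machine (glass-box model). Toy dataset, illustrative only.
from interpret import show
from interpret.glassbox import ExplainableBoostingClassifier
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

ebm = ExplainableBoostingClassifier()
ebm.fit(X_train, y_train)

# Renders interactive dashboards (e.g. in a notebook):
show(ebm.explain_global())                        # which features matter overall
show(ebm.explain_local(X_test[:5], y_test[:5]))   # why these five predictions were made
```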

Fairlearn

SmartNoise

Case study: TD Bank Report on Responsible AI in Financial Services

TD Bank conducted a survey on Canadians’ perspectives on AI and released a report titled “Responsible AI in Financial Services”. The report includes insights from academics, government staff, and other experts.

Key findings indicate that consumers acknowledge the value of AI and its potential to enhance their lives. However, there is a desire for improved understanding of how AI is utilized. Concerns exist regarding the rapid advancement of AI and the associated risks it may pose.

Explainability, Bias, Diversity, and Responsible Use in AI

The TD Bank report highlights key aspects related to responsible AI adoption:

  • Explainability: Addressing the limitations of AI technology in explaining decision-making processes. Three areas of focus include identifying what needs to be explained, expecting AI to evolve over time, and fostering consensus on AI capabilities.
  • Bias: Emphasizing the control of bias and reevaluating transparency, fairness, and accountability in an AI-centric world. Key considerations include multiple meanings of bias, the influence of data on bias in models, and the unexpected manifestations of bias.
  • Diversity: Recognizing the importance of diversity and inclusion throughout AI implementation to better cater to diverse audiences. Strategies include building diverse AI teams, utilizing representative datasets, and promoting multicultural workforces.
  • Techniques for responsible use: TD Bank suggests various techniques for ensuring responsible AI use. These include prioritizing customers and colleagues in decision-making processes, encouraging ideas from all levels of the organization, and fostering a positive and inclusive environment for the widespread adoption of AI benefits.

Procedures and policies

Implications, Approaches, and Impact Assessment in Responsible AI

Responsible AI usage is crucial to avoid undesirable consequences for organizations and society at large. The risks associated with incorrect or irresponsible AI application encompass operational, financial, regulatory, reputational, privacy, discriminatory, accidental, and political manipulation concerns.

Various governance systems exist to promote responsible AI, including the role of a chief ethics officer, an ethics office, an ethics committee or advisory board, centralized and decentralized approaches, and hybrid models.

Impact assessment is essential to ensure trustworthy AI. According to the European Union’s Ethics Guidelines for Trustworthy AI, an AI or automated decision must be lawful, ethical, and robust to be deemed trustworthy. Assessing the potential unwanted consequences of AI and their impact on individual and group rights is crucial. Applying regulatory requirements to AI systems posing certain risks is necessary for effective governance and regulation.

The Automated Decision Impact Assessment (ADIA)
Open Loop, a collaborative initiative supported by Facebook, developed a policy prototype for the ADIA after the need for risk assessment emerged. The ADIA is a tool that helps organizations identify and mitigate risks associated with automated decision-making systems.

Awareness, Training, and Responsible AI Practices

Raising awareness and providing training on responsible AI practices is crucial for organizations to effectively utilize AI while mitigating ethical and legal risks. Employees need to understand the implications and principles of ethical AI, identify potential issues, and communicate AI-related information to customers.

Employers should consider the following:

  • Prepare employees for the introduction of AI by providing appropriate training to maximize workforce productivity.
  • Educate employees about AI, dispelling fears, mistrust, and misconceptions.
  • Ensure that technical employees, such as engineers, are aware of the ethical implications of AI, not just its technical aspects.
  • Make customers aware of how AI is used and train customer-facing employees to address AI-related issues in their interactions.
  • Assign employees to monitor AI systems for ethical, legal, and regulatory compliance, in addition to technical solution-oriented roles.

Google’s Ethical Training and Continuous Improvement
Google has implemented an AI principles problem-spotting training course for its employees to identify potential ethical issues in AI usage. The training is built around Google’s AI principles and aims to ensure the ethical development and use of AI in its products. It has become mandatory for customer-facing Cloud team members to assess whether AI applications may have negative consequences or cause harm.

Monitoring and Validation Tools
Azure Machine Learning provides data drift monitoring, which detects changes in data distribution that can impact the accuracy and performance of machine learning models. This feature helps track data changes over time and adapt models to maintain accuracy.
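
As a rough sketch of the idea behind drift detection (not the Azure Machine Learning API), the snippet below compares a feature’s training-time distribution with recently scored production data using a two-sample Kolmogorov-Smirnov test. The data is simulated for illustration.

```python
# Conceptual sketch of data drift detection (not the Azure ML API): compare a
# feature's distribution at training time with what the model sees in production.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(42)
train_feature = rng.normal(loc=0.0, scale=1.0, size=5000)  # baseline (training) data
prod_feature = rng.normal(loc=0.4, scale=1.0, size=5000)   # simulated drifted data

stat, p_value = ks_2samp(train_feature, prod_feature)
if p_value < 0.01:
    print(f"Possible data drift (KS statistic={stat:.3f}); "
          "consider retraining or reviewing the input pipeline.")
else:
    print("No significant drift detected for this feature.")
```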

Error Analysis and Independent Validation
The Error Analysis toolkit helps improve model accuracy by identifying cohorts with high error rates and visualizing how errors are distributed. Independent validation using separate data sets is crucial to evaluate AI systems’ performance and reliability beyond the training environment.
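
A simple way to approximate what the Error Analysis toolkit surfaces is to compute error rates per cohort. The pandas sketch below uses hypothetical column names and data purely for illustration.

```python
# Minimal sketch of cohort-based error analysis (hypothetical columns/data);
# the Error Analysis toolkit automates and visualises this kind of breakdown.
import pandas as pd

df = pd.DataFrame({
    "age_band": ["18-30", "18-30", "31-50", "31-50", "51+", "51+"],
    "y_true":   [1, 0, 1, 1, 0, 1],
    "y_pred":   [1, 1, 1, 0, 0, 0],
})
df["error"] = (df["y_true"] != df["y_pred"]).astype(int)

# Error rate and support per cohort; cohorts with high error rates deserve
# closer inspection (more data, feature review, or targeted retraining).
print(df.groupby("age_band")["error"].agg(error_rate="mean", count="size"))
```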

Benefits of Centralised AI Inventory

Creating a centralised AI inventory offers several advantages, including:

  1. Collaboration and Efficiency: It enables collaboration between data science and IT teams, accelerating the development and deployment of models.
  2. Comprehensive Monitoring and Validation: All monitoring, validation, and governance tools for machine learning models are available in one centralized location.
  3. Ethical Focus: It ensures that everyone involved remains committed to creating ethical AI systems throughout the development and deployment stages.
  4. Standardised Processes: Teams can establish standardized processes to consistently deliver AI models.
  5. Governance and Compliance: It promotes governance across various models and assets, ensuring ongoing adherence to security, privacy, and compliance standards.

AI in business

Adoption of AI: Risks vs Opportunities

The increasing adoption of AI poses a decision for organizations on how to proceed, considering the associated risks and opportunities.

Businesses can adopt AI in three ways:

  1. Risk-Averse Stance: Businesses prioritize avoiding risk and wait for clearer regulations, potentially missing out on digital transformation.
  2. Balanced Stance: Businesses strike a balance between opportunities and risks, implementing risk and compliance management practices before AI transformation.
  3. First Movers’ Stance: Businesses prioritize innovation, accepting potential compliance and financial risks by modifying AI to conform to future regulations.

Responsible AI: Risks, Case Studies, and Shared Benefit Principle

As outlined above, decision-makers weighing AI’s risks and opportunities can take a risk-averse, balanced, or first-mover stance.

AI in the Insurance Industry
AI adoption in the insurance industry offers benefits such as enhanced decision-making, increased productivity, cost reduction, and improved customer experiences. The industry shifts from “detect and repair” to “predict and prevent” approaches.

Case study: State Farm
State Farm implemented the Dynamic Vehicle Assessment Model (DVAM) to predict total losses in car accident claims, improving customer experience and reducing the total loss process time. State Farm’s AI governance system ensured compliance with guiding principles.

AI in the Technology Industry
AI is used in technology companies to improve security, manage data, detect patterns, diagnose issues, enhance customer support, and understand customer needs. Responsible AI management is crucial in demonstrating commitment to guiding principles.

Case study: Microsoft
Microsoft invested in AI development and implemented responsible AI initiatives. The company has centralised and decentralised governance systems to address responsible AI issues. Microsoft made ethical decisions in a facial recognition project to protect human rights.

Key Lessons:

  1. Collaboration across disciplines is essential for successful AI deployment.
  2. Robust governance processes and adherence to guiding principles are crucial.
  3. AI technology should be developed with responsible use in mind to avoid potential harm.

Shared Benefit Principle: AI technologies should aim to benefit and empower as many people as possible, addressing economic inequalities. AI should be used to promote global equality and distribute wealth and health equitably.

Guiding principles for responsible AI in business

To ensure responsible AI use in the business world, a set of key questions can guide organizations in understanding responsible AI within their specific context.

The 6 guiding principles for responsible AI are fairness, reliability and safety, privacy and security, inclusiveness, transparency, and accountability.

Creating Your Guiding Principles
While the above 6 principles provide general guidance, organizations should develop their own guiding principles and uphold them throughout AI development and deployment. Tools like MLOps in Azure Machine Learning can assist in aligning decisions with guiding principles, ensuring they are embedded in the AI process.

Key Questions
Three key questions help organizations develop and deploy AI responsibly:

  1. How can a human-led approach drive value for the business?
  2. How will the organization’s foundational values shape the AI approach?
  3. How will AI systems be monitored to ensure responsible evolution?