ChatRegs23
  • Definitions
    • Artificial intelligence
      • Artificial intelligence (AI) refers to an engineered system that generates predictive outputs such as content, forecasts, recommendations or decisions for a given set of human-defined objectives or parameters without explicit programming. AI systems are designed to operate with varying levels of automation.
      • Source
    • Machine learning
      • Machine learning refers to the patterns derived from training data using machine learning algorithms, which can be applied to new data for prediction or decision-making purposes (see the sketch below).
      • Source
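      • Illustrative sketch (not part of the sourced definition): the definition above describes patterns being derived from training data and then applied to new data. The minimal Python sketch below assumes the scikit-learn library and an invented pass/fail dataset; both are illustration choices, not from the source.
```python
# Minimal illustration of the machine learning definition above: patterns are
# derived (fit) from training data and then applied to new, unseen data.
# The features, labels and values here are invented for illustration only.
from sklearn.tree import DecisionTreeClassifier

# Training data: each row is [hours_of_study, prior_score]; labels are pass/fail.
X_train = [[2, 55], [8, 80], [1, 40], [10, 90], [4, 65], [7, 75]]
y_train = ["fail", "pass", "fail", "pass", "fail", "pass"]

model = DecisionTreeClassifier(max_depth=2)
model.fit(X_train, y_train)        # derive patterns from the training data

X_new = [[6, 70], [1, 50]]         # new data the model has not seen before
print(model.predict(X_new))        # apply the learned patterns to predict
```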
    • Generative AI models
      • Generative AI models generate novel content such as text, images, audio and code in response to prompts.
      • Source
    • Large Language Model (LLM)
      • A large language model (LLM) is a type of generative AI that specialises in the generation of human-like text.
      • Source
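      • Illustrative sketch (not part of the sourced definitions): the two definitions above describe generative AI models and LLMs as producing novel, human-like text in response to a prompt. The sketch below uses the Hugging Face transformers library and the small public GPT-2 model as stand-ins; neither is named in the source.
```python
# Hypothetical "prompt in, generated text out" sketch. The library (transformers)
# and model (gpt2) are assumptions chosen for illustration, not from the source.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "A plain-language definition of artificial intelligence is"
result = generator(prompt, max_new_tokens=40, num_return_sequences=1)

# The model continues the prompt with newly generated, human-like text.
print(result[0]["generated_text"])
```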
    • Multimodal Foundation Model (MfM)
      • A Multimodal Foundation Model (MfM) is a type of generative AI that can process and output multiple data types (e.g. text, images, audio).
      • Source
    • Automated Decision Making (ADM)
      • Automated Decision Making (ADM) refers to the application of automated systems in any part of the decision-making process. Automated decision making includes using automated systems to:
        • make the final decision
        • make interim assessments or decisions leading up to the final decision
        • recommend a decision to a human decision-maker
        • guide a human decision-maker through relevant facts, legislation or policy
        • automate aspects of the fact-finding process which may influence an interim decision or the final decision.
        Automated systems range from traditional non-technological rules-based systems to specialised technological systems which use automated tools to predict and deliberate.
      • Source
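      • Illustrative sketch (not part of the sourced definition): one ADM mode listed above is a system that recommends a decision to a human decision-maker rather than making the final decision itself. The minimal rules-based Python sketch below uses invented rules, thresholds and field names that do not reflect any real program or legislation.
```python
# Hypothetical rules-based ADM sketch: the system applies fixed, human-authored
# rules and returns a recommendation only; a human makes the final decision.
# All rules, thresholds and field names are invented for illustration.
from dataclasses import dataclass

@dataclass
class Application:
    income: float
    dependants: int

def recommend(app: Application) -> str:
    """Interim, rules-based assessment that feeds a human decision-maker."""
    if app.income < 30_000 or app.dependants >= 3:
        return "Recommend APPROVE - refer to human decision-maker for final decision"
    return "Recommend DECLINE - refer to human decision-maker for final decision"

print(recommend(Application(income=28_500, dependants=1)))
```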
  • Australia’s AI Ethics Principles
    • Human, societal and environmental wellbeing
      • AI systems should benefit individuals, society and the environment.
      • Source
    • Human-centred values
      • AI systems should respect human rights, diversity, and the autonomy of individuals.
      • Source
    • Fairness
      • AI systems should be inclusive and accessible, and should not involve or result in unfair discrimination against individuals, communities or groups.
      • Source
    • Privacy protection and security
      • AI systems should respect and uphold privacy rights and data protection, and ensure the security of data.
      • Source
    • Reliability and safety
      • AI systems should reliably operate in accordance with their intended purpose.
      • Source
    • Transparency and explainability
      • There should be transparency and responsible disclosure so people can understand when they are being significantly impacted by AI, and can find out when an AI system is engaging with them.
      • Source
    • Contestability
      • When an AI system significantly impacts a person, community, group or environment, there should be a timely process to allow people to challenge the use or outcomes of the AI system.
      • Source
    • Accountability
      • People responsible for the different phases of the AI system lifecycle should be identifiable and accountable for the outcomes of the AI systems, and human oversight of AI systems should be enabled.
      • Source
