Artificial Intelligence (AI) Policy – Made to Help
Policy Owner: CEO Made to Help
Policy Approver: Executive Leadership Team (ELT) / Board (as delegated)
Version: v1.0
Effective Date: 16th April 2026
Next Review Date: 30th June 2026
Version History
Version | Date of Effect | Amendment Details | Amended By
1.0 | April 2026 | Initial Release | CEO Made to Help
1.1 | June 2026 | Scheduled policy review |
Policy Development and Use of Artificial Intelligence
This policy was developed with the assistance of artificial intelligence tools to support drafting, structuring, and alignment with recognised best-practice frameworks.
All content has been reviewed, validated, and approved by Made to Help subject matter experts and Executive leadership. Responsibility for the content, interpretation, and application of this policy rests solely with Made to Help.
1. Purpose
This Artificial Intelligence (AI) Policy establishes the principles, behaviours, and expectations for the ethical, responsible, secure, and effective use of artificial intelligence across the Made to Help business.
AI presents significant opportunities to enhance education, training, operational efficiency, member experience, and decision-making. At the same time, AI introduces new and heightened risks relating to ethics, privacy, security, bias, accountability, and trust. This policy provides a clear and consistent foundation for balancing innovation with responsibility. For the purposes of this policy, AI includes automation and automation tools.
This policy is intended to function as Made to Help’s foundational AI operating rulebook. It sets out what staff can and cannot do with AI, the standards they must follow, and the safeguards required. It is deliberately written to be practical, accessible, and easy to apply in day-to-day work.
Specifically, this policy aims to:
• Protect the rights, safety, and interests of Made to Help members, registrars, staff, and the broader community.
• Enable responsible, value-adding use of AI to support Made to Help’s strategic objectives and service delivery.
• Reduce ambiguity and risk by clearly defining acceptable, restricted, and prohibited AI use.
• Promote transparency, accountability, and contestability in AI-assisted decisions.
• Establish a consistent, risk-based approach to identifying, assessing, managing, and monitoring AI risks.
• Build organisational confidence, capability, and trust in AI.
2. Scope
This policy applies across the entire Made to Help enterprise and covers:
• All Made to Help employees, contractors, volunteers, and Board or Committee members.
• All AI systems, tools, models, and services that are developed, procured, configured, embedded, trialled, or used by or on behalf of Made to Help.
For this policy, an AI system is any technology that uses data to make inferences or generate outputs—such as predictions, recommendations, content, classifications, or decisions—with a degree of autonomy.
This includes, but is not limited to:
• Generative AI tools that create text, images, audio, video, or code
• Machine learning and predictive analytics systems
• Decision-support tools and automated decision-making systems
• Conversational agents, virtual assistants, and chatbots
• Traditional rule-based automations or workflow tools that do not learn or infer
• Any tool, of any type, in which personal, sensitive, or private information or company intellectual property is used
This policy does not apply to:
• Standard spreadsheet calculations or formulae
• Business intelligence dashboards that present descriptive analytics only
Where there is uncertainty about whether a tool or system falls within scope, staff and contractors must seek guidance from the AI Policy Owner before use.
3. Policy Statements
The following policy statements define Made to Help’s expectations for the responsible design, adoption, and use of AI. These principles must guide all decisions relating to AI and be applied in conjunction with Made to Help’s broader risk management, privacy, security, and technology governance frameworks.
3.1 Ethical and Human-Centred Use
AI systems must be used in an ethical, lawful, and human-centred manner. AI must support and augment human decision-making, not replace it where professional judgement, empathy, or accountability are required.
AI must not be used in ways that deceive, manipulate, misrepresent authorship, or undermine trust. Where AI contributes to content, analysis, or recommendations, this contribution must be understood and appropriately disclosed.
All AI use must align with Australia’s AI Ethics Principles and Made to Help’s values.
3.2 Clear Accountability
Each AI system must have an identified AI System Owner who is accountable for the system across its full lifecycle, including design, procurement, configuration, deployment, operation, annual governance review, ongoing monitoring, and decommissioning.
Accountability must be established before any AI system is developed, trialled, or used. Where AI is supplied or supported by third parties, roles, responsibilities, and liabilities must be clearly defined and documented across the supply chain.
3.3 Risk-Based Governance and Impact Assessment
All AI use cases must be assessed using a risk-based approach proportionate to their potential impact on individuals, the organisation, and the community.
AI systems must undergo screening, risk assessment, and approval in accordance with Made to Help's AI governance framework before use. AI must not be deployed where risks exceed Made to Help's risk appetite or where appropriate mitigations cannot be implemented.
Additional scrutiny is required for AI systems that:
• Influence decisions affecting individuals or groups
• Use personal, sensitive, or health information
• Impact vulnerable or marginalised populations
3.4 Quality, Reliability, and Security
AI systems must be fit for purpose, reliable, and secure throughout their lifecycle. Appropriate controls must be implemented based on the level of risk, including technical, procedural, and organisational safeguards.
Controls may include, but are not limited to:
• Role-based access controls and least-privilege principles
• Data minimisation, encryption, and secure storage
• Network isolation or segregation where feasible
• Prompt, input, and output logging for auditability
• Ongoing performance monitoring and periodic review
• Content moderation and filtering
• Usage limits, spend caps, and monitoring of consumption
• Controlled model updates, versioning, and rollback procedures
AI systems handling personal, sensitive, or health information must comply with Made to Help’s privacy, data governance, and information security obligations.
3.5 Fairness, Inclusion, and Accessibility
AI systems must be designed and used in ways that promote fairness, inclusion, and accessibility. Reasonable steps must be taken to identify and mitigate bias in data, models, and outputs.
AI must not result in unlawful or unfair discrimination and must support Made to Help’s commitment to diversity, inclusion, and accessibility.
3.6 Transparency and Contestability
Made to Help is committed to transparency in its use of AI. Where AI materially influences decisions, outcomes, or interactions, Made to Help will provide appropriate notice and information proportionate to the level of risk and impact.
Mechanisms must exist to:
• Document AI-assisted decisions
• Enable explanation and review where appropriate
• Allow individuals to request human reconsideration or challenge outcomes
3.7 Human Oversight and Control
Human oversight must be maintained over all AI systems. The level of oversight must increase with the AI system's potential impact and risk.
Humans must be able to intervene, override, pause, or deactivate AI systems where necessary. Where AI supports critical services, manual fallback processes must be maintained.
3.8 Environmental Sustainability
Consideration should be given to model efficiency, infrastructure usage, vendor sustainability practices, and the environmental impact of AI systems, consistent with Australian Government guidance.
4. Governance and Responsibilities
This policy is supported by the governance structures defined in the Made to Help Governance Framework, including:
• Executive Leadership Team
• Risk Management Committee
All employees and contractors must:
• Comply with this policy and related frameworks
• Complete required AI training, where applicable
• Use only approved AI tools for Made to Help work
• Report AI-related incidents, risks, or unexpected behaviour
5. Permitted, Restricted, and Prohibited Use
Permitted Use
AI may be used where:
• The use case is approved or falls within pre-approved low-risk categories
• No personal, sensitive, or confidential data is exposed
• Humans review outputs before use
• Client data is uploaded only to the Made to Help AI Agent, never to any other AI platform
Restricted Use
AI use requires explicit approval where:
• Personal, sensitive, or health information is involved
• AI influences operational, educational, or role-based decisions
• Generative AI produces externally published content
Prohibited Use
AI must not be used to:
• Make fully automated decisions with significant impact on individuals
• Carry out unapproved activities, in particular the processing of sensitive data
• Circumvent governance, security, or procurement controls
• Generate misleading, discriminatory, or deceptive content
6. Incident Management
AI-related incidents, breaches, or unexpected behaviour must be reported in accordance with Made to Help’s incident management procedures.
Made to Help will maintain the capability to suspend or deactivate AI systems and invoke manual workarounds where required.
7. Training and Awareness
Made to Help will provide role-appropriate AI training to build awareness of AI capabilities, limitations, risks, and obligations. Training will be refreshed periodically and provided to staff in roles that use relevant AI tools.
8. Review and Continuous Improvement
This policy will be reviewed annually, or earlier if triggered by:
• Significant AI incidents
• Material regulatory or legislative change
• Emergence of new high-impact AI technologies
All substantive changes require approval by the designated policy approver.
9. References and Supporting Documents
• Reference: Department of Industry, Science and Resources – Australia’s AI Ethics Principles (Australia’s AI Ethics Principles | Department of Industry Science and Resources)
• Reference: Digital Transformation Agency – Policy for the Responsible Use of AI in Government (Australia) v2.0 (Policy for the responsible use of AI in government - Version 2.0 | digital.gov.au)
• Reference: National Artificial Intelligence Centre – Voluntary AI Safety Standard (VAISS) (Guidance for AI Adoption | Department of Industry Science and Resources)
• Reference: Made to Help – Privacy Policy
• Reference: Made to Help – Risk Management Policy
• Reference (ESG, Environmental and Sustainability): Australian Senate Select Committee on Adopting Artificial Intelligence (AI) (The Senate: Select Committee on Adopting AI)
Appendix A – Definitions
For this policy, the following definitions apply. These definitions are intended to promote a common understanding across Made to Help.
Artificial Intelligence (AI)
A set of computational techniques that enable systems to perform tasks usually requiring human intelligence, including learning from data, recognising patterns, generating content, making predictions, or supporting decision-making.
AI System
Any technology that uses data to make inferences or generate outputs—such as predictions, recommendations, classifications, content, or decisions—with a degree of autonomy. This includes systems developed internally, procured from third parties, or embedded within broader platforms.
Generative AI
A subset of AI that can create original content, including text, images, audio, video, or software code, in response to prompts or inputs.
Examples include large language models and image generation tools.
Automated Decision-Making (ADM)
A process in which an AI system makes or materially influences a decision without direct human involvement at the point of decision. This includes decision-support systems where AI outputs are heavily relied upon.
Human-in-the-Loop
A governance approach where humans retain active oversight of AI systems, including the ability to review, intervene, override, or halt AI-generated outputs or decisions.
AI System Owner
The individual accountable for an AI system across its full lifecycle, including compliance with this policy, risk assessments, approvals, monitoring, and decommissioning.
Shadow AI
The use of AI tools or systems that have not been approved or assessed, including personal or public AI tools used for Made to Help work without authorisation.
High-Risk AI
AI systems that, due to their purpose, data use, or potential impact, present elevated risks to individuals, groups, or the organisation, including systems that influence decisions affecting rights, access to services, professional standing, or wellbeing.
Appendix B – Legal and Regulatory Alignment
This policy is designed to align with relevant Australian laws, regulatory expectations, and recognised standards for the safe, ethical, and responsible use of AI.
Australian Privacy and Data Protection Law
AI systems that collect, use, store, or generate personal information must comply with the Privacy Act 1988 (Cth) and the Australian Privacy Principles (APPs). This includes obligations relating to lawful and fair collection, purpose limitation, data minimisation, security safeguards, transparency, access, and correction.
Where AI systems involve health information or other sensitive information, additional protections apply. AI use cases must be assessed to ensure compliance with all applicable Commonwealth, state, and territory privacy and health records legislation.
Made to Help will meet its obligations under the Notifiable Data Breaches (NDB) scheme in the event of an AI-related incident likely to result in serious harm, including timely assessment, notification to affected individuals, and reporting to the Office of the Australian Information Commissioner.
Automated Decision-Making and Transparency
This policy anticipates reforms to Australia’s privacy and consumer protection framework that increase transparency obligations for automated decision-making.
Where AI systems materially influence decisions that affect individuals, Made to Help will implement proportionate measures to:
• Provide notice that AI is being used
• Enable meaningful explanation of AI-assisted decisions
• Support human review and contestability where appropriate
AI Ethics and Safety Standards
This policy aligns with Australia’s AI Ethics Principles, which underpin Made to Help’s commitment to human-centred values, fairness, accountability, transparency, and safety.
The policy is also aligned with the Voluntary AI Safety Standard (VAISS) issued by the National Artificial Intelligence Centre, providing a nationally consistent baseline for risk-based AI governance, assurance, and continuous monitoring.
Information Security and Cyber Security
AI systems must comply with Made to Help’s information security requirements and reflect Australian cyber security best practice, including principles consistent with the Australian Signals Directorate (ASD) Essential Eight, where applicable.
This includes secure system design, access controls, logging, monitoring, incident response, and vendor security assurance.
Contracting and Third-Party Risk
Where AI systems are procured from third parties or embedded in vendor platforms, contractual arrangements must address:
• Data ownership and use
• Confidentiality, privacy and security controls
• Model updates and change management
• Audit and assurance rights
• Incident notification and remediation
Environmental and Sustainability Considerations
Consistent with the findings of the Australian Senate Select Committee on Adopting Artificial Intelligence, Made to Help recognises the environmental impact of AI technologies. AI adoption should align with Made to Help’s sustainability commitments and environmental risk appetite, including consideration of infrastructure efficiency and vendor practices.
Appendix C – Policy Review and Assurance
Compliance with this policy will be monitored through Made to Help’s established risk management, audit, and assurance processes.
This policy will be reviewed at least annually, or sooner where triggered by:
• Significant AI-related incidents
• Material changes to law or regulation
• Introduction of new, high-impact AI technologies
All substantive amendments require approval in accordance with Made to Help’s delegations of authority.