
SSA's 4 Zones

Activity Zone 1: Workshops

The Workshops zone provides a space for interactive sessions where attendees can engage in hands-on activities and group discussions to enhance their understanding of key topics. Each workshop is designed to foster collaboration, practical learning, and skills development.

  • Focus on interactive and hands-on sessions

  • Group discussions and collaborative activities

  • Practical learning and skills development

Activity Zone 2: Masterclasses

The Masterclasses zone offers in-depth sessions led by experts, aimed at providing advanced knowledge and specialized skills to participants. These sessions are ideal for those looking to deepen their expertise in specific areas.

Activity Zone 3: Main Plenary

The Main Plenary serves as the central hub for high-profile discussions and presentations. This stage hosts a variety of sessions, from expert talks to panel discussions, bringing together thought leaders to share insights and drive impactful conversations.

  • High-profile discussions and presentations

  • Includes Spotlight Programme, Expert Talks, Corporate Expertise Sharing

  • Focus on impactful conversations and thought leadership

Activity Zone 4: Networking Space

A dynamic networking space where attendees can watch live broadcasts from COP29, allowing them to stay updated with real-time developments while engaging in informal discussions and networking opportunities.

1. Our Commitment

AlterCOP is committed to harnessing the power of Artificial Intelligence (AI) in a manner that is sustainable, responsible, and ethical.

Sustainable: Minimising the negative environmental impact of AI use, emphasising energy efficiency, data sustainability, and transparency in the operation of AI technologies.
Responsible: Emphasising prudence, human oversight, and verification of both usage and outcomes.
Ethical: Using AI in ways that do not infringe privacy or serve unscrupulous ends, and that minimise bias or misrepresentation of people, topics, and decisions.

Recognising both the potential and the inherent risks of AI technologies, especially in the context of climate action and social impact, we pledge to:

  • Minimise our utilisation of AI: Use AI only where it is genuinely needed, without delegating significant portions of our work to it.

  • Use AI systems only to advance our mission: Support AlterCOP's goals of fostering holistic perspectives on climate, driving climate solutions, and building resilient communities.

  • Use AI systems to prioritise well-being: Promote human flourishing, environmental health, and societal equity.

  • Uphold trust through responsible usage: Foster confidence and understanding among our internal and external stakeholders, including our volunteers, partners, sponsors, participants, and wider public.

As developments in AI are rapid and AlterCOP, as a young organisation, is still building its governance measures, this document offers draft policy guidelines aimed at:

  • Promoting responsible use of AI within our volunteer team

  • Sharing our AI use guidelines and practices with our partners, participants and the wider public, especially during our events

This document is a draft and will be updated. If you have feedback on this document, please reach out to Thibaut (thibaut@thetransmutationprinciple.com). 

Thanks!

2. Our guiding principles for AI Utilisation

Our AI policy is guided by the following principles, in alignment with our core values and Singapore's National AI Strategy (NAIS).

2.1. Transparency & Accountability

  • Disclosure of AI use for official documents, assets and processes: Clearly inform users and stakeholders when they are interacting with documents and assets that have been developed with AI use

  • Data provenance: Be transparent about the AI sources (if applicable) used to produce documents, assets and processes.

  • Clear responsibilities: Establish clear roles, responsibilities, and accountability mechanisms within AlterCOP for the use and monitoring of use of AI systems. 

  • Compliance: Adhere to all relevant data protection laws (e.g., Singapore's PDPA) and other applicable regulations pertaining to AI.
     

Please note that this policy is currently worded to apply to Singapore laws only; each AlterCOP chapter should review and adapt the AI policy as needed to be aligned with their local regulatory framework.

  • Resilience: Use AI systems that are robust and resilient to errors, adversarial attacks, and unexpected inputs.

  • Data security: Implement stringent data security measures to protect the integrity and confidentiality of data used by and generated from AI systems. This is aligned with our data protection policy.

2.2. AI Frugality

  • AI only when necessary or relevant: Although "necessary" and "relevant" are subjective, we encourage our team members to use AI only when it demonstrably adds value through:

    • Time saved on delivering the assigned task

    • AI insights being particularly helpful in curating content, documents or any kind of information

    • Creating a process, document, or piece of content from scratch, and only when no template yet exists in the AlterCOP library

  • Recommended scopes of AI utilisation: See section 4, where we list areas and tasks where AI use is recommended, heavily restricted, or prohibited.

2.3. Fairness & Non-discrimination

  • Inclusivity: Use, as much as possible, AI systems that are inclusive and consider the diverse needs and perspectives of all users and affected communities, especially in their algorithm design.

  • Use of low-bias AI only: We prohibit any AI/LLM that employs propaganda algorithms or exhibits biases misaligned with our values.

2.4. Human-centric Governance

  • Human Oversight of AI use: AlterCOP commits to maintain appropriate human oversight and control over AI systems, ensuring that the team ultimately remains accountable for decisions and actions influenced by AI.

  • Equitable Access: AlterCOP strives to ensure that the benefits of AI technologies are accessible to all AlterCOP team members, without discrimination based on socio-economic status, geography, or other protected characteristics.

3. Implementation and operationalisation

To translate these principles into practice, AlterCOP will implement the following operational guidelines:

3.1. Governance, roles and responsibilities for the use and monitoring of use of AI systems
The Singapore core team is responsible for:

  • The development and revision of AI use policy for AlterCOP 

  • The application of AI use policy for AlterCOP Singapore

  • The dissemination of AI use policy across the Singapore volunteers and international chapter core teams

The international chapter core teams are responsible for assessing, enforcing and revising the AI policy for use in their country’s jurisdiction when applicable.

Volunteers are expected to adopt these guidelines and self-report AI use as stated in this policy.
 

3.2. Disclosure of AI use for official documents and assets by AlterCOP team 

The AlterCOP core team will disclose AI utilisation in the footer of every document produced:

  • “Designed by AI”: Document that has been entirely designed and produced by AI with no human involvement, with a mention of the LLM used.

  • “Co-written with AI”: Document that has been partly designed and produced with AI, with a mention of the LLM used.

  • “Proofread with AI”: Document that has been designed and produced without AI, though AI was used for validation of content, with a mention of the LLM used.

  • “AI-free”: Design and production of document did not utilise AI

 

Disclosure will also be made if AI was used for research and brainstorming purposes in the development of official documents and assets.

 

How to disclose AI use with APA

Format:

Designed by/Co-written with/Proofread with AI. Author. (Date). Name of tool (Version of tool) [Large language model]. URL

Example:

Co-written with AI. OpenAI. (2023). ChatGPT (Mar 14 version) [Large language model]. https://chat.openai.com/chat
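As a minimal sketch of how this disclosure format could be applied consistently, the following hypothetical helper (not part of any AlterCOP tooling) assembles the footer line from its APA-style components, restricted to the three AI-use labels defined above:

```python
# Hypothetical helper: build the APA-style AI disclosure footer line
# described in section 3.2. The function name and signature are
# illustrative assumptions, not an existing AlterCOP tool.

def ai_disclosure(mode, author, date, tool, version, url):
    """Format an AI-use disclosure line.

    mode must be one of the labels defined in this policy:
    "Designed by", "Co-written with", or "Proofread with".
    """
    allowed = {"Designed by", "Co-written with", "Proofread with"}
    if mode not in allowed:
        raise ValueError(f"mode must be one of {sorted(allowed)}")
    # Mirror the APA format: Label AI. Author. (Date). Tool (Version)
    # [Large language model]. URL
    return (f"{mode} AI. {author}. ({date}). {tool} ({version}) "
            f"[Large language model]. {url}")

print(ai_disclosure("Co-written with", "OpenAI", "2023",
                    "ChatGPT", "Mar 14 version",
                    "https://chat.openai.com/chat"))
```

Running the example above reproduces the sample disclosure line, and the guard on `mode` keeps footers limited to the labels this policy recognises ("AI-free" documents simply carry no such line).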

 

3.3. Promoting awareness across stakeholders

This document will act as the main source of information on our AI practices.

  • Dissemination of AI policy: This document will be disseminated to all our team members and contributors in Singapore and in international chapters where we have a dedicated core team.

  • Internal Education: AlterCOP Singapore core team provides guidelines, internal communication and resources to all personnel involved in AI development, deployment, or decision-making on the principles of this policy.

  • External Stakeholder Communication: AlterCOP will clearly communicate our AI policy and practices to all relevant external stakeholders (suppliers, sponsors, partners, participants, etc.). This will be done via publication of the AI policy on our website and by notifying our external stakeholders via email or other appropriate channels.

3.4. Monitoring and evaluation

  • Feedback Mechanisms: Users and stakeholders may provide feedback, raise concerns, or request redress regarding AI-driven decisions to the co-founders of AlterCOP, The Matcha Initiative's DPO Anne Langourieux (anne@thematchainitiative.com), or The Transmutation Principle's DPO Thibaut Meurgue-Guyard (thibaut@thetransmutationprinciple.com).

  • Continuous Improvement: This policy is a living document and will be regularly reviewed and updated to reflect advancements in AI technology, evolving best practices, and lessons learned from our own implementations.

3.5. Accountability Mechanisms

To ensure accountability of AI use within AlterCOP, the following operational accountability mechanisms apply:

  • Internal Stakeholders

    • All volunteers under AlterCOP Singapore to sign this AI use policy 

    • International chapter core team members to sign this AI use policy

    • All AlterCOP Singapore volunteers to implement guidelines and disclose AI usage

  • External Stakeholders

    • AI use policy clause added in our speaker and trainer onboarding form to be validated by each speaker or trainer who presents at AlterCOP

    • Suppliers or vendors (AV, marketing, and communications agencies and suppliers) are to disclose AI usage in their scope of work with AlterCOP. They shall provide, for validation by AlterCOP, a list of the services in which they engage AI, how it is used, and for what purpose.

  • External Communication & Communication Assets

    • Uploading of our AI policy on AlterCOP’s website

    • Mention of AI usage in external documentation when applicable (reports, etc.)

 

4. AI Utilisation guidelines for AlterCOP Operations

This section provides specific guidance on the appropriate use of AI tools within AlterCOP's event preparation, meetings, and other operational scopes.

4.1. Priority scopes for AI Utilisation

We encourage the responsible use of AI tools in the following areas to enhance efficiency, creativity, and impact, always with human oversight:

  • Content generation (drafting & ideation)

    • Marketing copy: Drafting social media posts, email newsletters, website content, and ad copy.

    • Preliminary research summaries: Generating summaries of academic papers, reports, or news articles on climate topics for internal briefing.

    • Brainstorming: Ideation for conference themes, session titles, and speaker suggestions.

  • Proofreading: We encourage the use of DeepL to ensure correct English syntax.

  • Data Analysis & insights

    • Audience segmentation: Analysing attendee registration data to identify trends and preferences (anonymised data where applicable).

    • Feedback analysis: Summarising and categorising qualitative feedback from surveys or open-ended responses.

    • Trend identification: Spotting emerging themes in climate discussions or public sentiment from large datasets.

  • Productivity & efficiency

    • Meeting transcription & summarisation: Transcribing meeting audio and generating concise summaries (with clear consent of all participants).
      From July 1, 2025, we will accept only one official transcription solution per meeting; personal AI assistants may not be used in these meetings.
      For meetings and townhalls with Core Team Members, Gemini, as part of our Google Business Suite, will be the AI transcription tool used.

    • Initial drafts of standard documents: Creating first passes of internal reports, memos, or operational checklists.

  • Accessibility enhancements

    • Real-time captioning: Providing live captions for online meetings.

    • Language translation: Facilitating communication across diverse linguistic backgrounds. Each translation co-generated with AI will nevertheless undergo human validation.

4.2. Scopes where AI utilisation is heavily restricted

To safeguard AlterCOP's integrity, uphold ethical standards, and prevent potential harms, AI tools are heavily restricted in the following areas:

 

  • Handling of highly sensitive or personal data

    • AI tools will not be used to process or analyse highly sensitive or unanonymised personal data (e.g., health information, financial details including basic payment processing) unless specifically approved and with robust security and privacy safeguards.

  • Content generation (drafting & ideation)

    • Curating programmes: Generating initial structures for day flows or programmes.

    • Speech/Presentation outlines: Generating initial structures for speeches, presentations, or masterclasses.

 

4.3. Scopes where AI utilisation is prohibited

  • Decision-Making on sensitive matters

    • Speaker selection: AI must not be used to make final decisions on speaker selection, especially where diversity, representativeness, or individual merit are critical. AI can assist with initial shortlisting based on objective criteria but human review and final decision-making are mandatory.

    • Grant or sponsorship approvals: AI will not be used to approve or deny grants, sponsorships, or partnership agreements. These require human judgment, relationship building, and nuanced ethical consideration.

    • Attendee vetting or exclusion: AI will not be used to automatically vet, exclude, or de-prioritise attendees based on any sensitive or potentially biased criteria.

  • Original artistic creation for public release (without attribution)

    • While AI can assist with ideation, any publicly released visual art, music, or other creative works generated or significantly influenced by AI must be appropriately attributed and reviewed by human artists/designers to ensure originality and ethical sourcing of training data. AlterCOP does not use AI for full image production.

  • Direct interaction with public without clear disclosure

    • Public-facing chatbots: Any AI chatbot interacting directly with the public (e.g., on the website for FAQs) must clearly identify itself as an AI and have robust human oversight and escalation pathways.

    • "Deepfake" or synthetic media: The creation of misleading or deceptive synthetic media (audio, video, images) of individuals is strictly prohibited. Using AI to mimic a person's voice or appearance is permitted only with their explicit consent, with clear disclosure, and for legitimate, non-misleading purposes.

  • Generation of original research & factual content without verification

    • Scientific statements: AI-generated content must never be presented as definitive scientific research or factual statements without thorough human review and verification against primary, credible sources. This is especially critical for climate-related information where accuracy is paramount. For all figures given, the source must be added to the document. 

    • Policy & Best practices recommendations: AI must not be used to automatically generate policy recommendations or positions for AlterCOP without significant human input, review, and alignment with our mission and values.

  • Legal & compliance advice

    • AI tools will not be used to generate legal advice, compliance guidance, or interpret complex regulations. These require qualified human legal expertise.

5. Breach of policy

Any suspected breach of this policy should be reported immediately to Thibaut Meurgue-Guyard, Co-founder & DPO of AlterCOP. Violations will be investigated promptly and appropriate action will be taken, which may include suspension of AI system use, remediation, or other disciplinary measures as relevant.

6. AI usage for the design of this policy

Please note this policy has been proofread with AI for the following sections:

  • 4. AI utilisation guidelines for AlterCOP Operations, in order to confirm which scopes should prohibit AI for governance purposes, in alignment with global best practices

 

AI was not used to produce the rest of the document.
