Shaping AI for good

Building trustworthy and human-centric AI in a rapidly evolving regulatory landscape

Learn about responsible AI governance: aligning ethical principles, global standards, the EU AI Act, and verifiable knowledge infrastructure.
Format

Online webinar

Total duration

~1 hour

Speakers

Tomaž Levak
Laura Bishop
Mark Thirlwell

Price

Free

What is this course about?

Artificial intelligence is reshaping industries, business models, and decision-making. But as AI systems become more autonomous and embedded into enterprise operations, concerns around transparency, accountability, bias, misinformation, and regulatory compliance intensify.
To unlock AI’s full potential, organizations must move beyond experimentation and establish governance frameworks that ensure AI remains responsible, human-centric, and verifiable.
This webinar brings together experts from OriginTrail and the British Standards Institution (BSI) to explore how organizations can build a trustworthy and sustainable AI future by aligning:
  • Ethical principles and human-centric design
  • Global standards and assurance mechanisms
  • Emerging regulations (including the EU AI Act)
  • Enabling technologies such as the OriginTrail Decentralized Knowledge Graph (DKG)
Rather than treating AI governance as a compliance afterthought, this course presents it as a strategic capability, essential for long-term innovation and public trust.

Why does this matter?

AI presents transformative opportunities, but also introduces significant risks:
  • AI-generated misinformation and deepfakes
  • Intellectual property infringements
  • Bias and lack of transparency
  • Unverified data sources
  • Regulatory uncertainty
  • Loss of human oversight
With regulatory initiatives such as the EU AI Act and other global frameworks emerging, organizations must proactively address governance and compliance challenges.
Trust is no longer optional; it is a competitive differentiator.
Building public trust in AI requires:
  • Clear accountability
  • Robust safeguards
  • Standards-aligned governance
  • Verifiable knowledge infrastructure
This course explores how organizations can combine governance frameworks with technologies such as the OriginTrail DKG to anchor AI systems in traceable, auditable, and provable knowledge.

What will you learn?

Through this webinar, you will learn how to:
  • Evaluate AI systems through a human-centric lens, understanding how psychological factors like trust, bias, and overreliance shape outcomes.
  • Navigate AI safety and governance frameworks, including standards and assurance approaches.
  • Understand the evolving global regulatory landscape, including the EU AI Act.
  • Identify practical barriers to ethical AI adoption, and strategies to overcome them.
  • Reduce risks such as deepfakes, misinformation, and erosion of autonomy.
  • Understand what “verifiable AI” looks like in practice.
  • Explore how the OriginTrail DKG supports transparency, provenance, auditability, and compliance.
By the end of the course, you will have:
  • A clear mental model for trustworthy AI governance.
  • An understanding of how standards and regulations intersect.
  • Practical language to discuss responsible AI with legal, compliance, and executive stakeholders.
  • Insight into how verifiable knowledge infrastructure strengthens AI accountability.
  • Greater clarity on how to prepare your organization for regulatory and audit readiness.

Who is this course for?

This course is for you if you are:
  • An enterprise leader or executive decision-maker
  • A compliance, risk, legal, or governance professional
  • A member of an AI, data, or digital transformation team
  • A product owner building AI-enabled solutions
  • A standards or policy contributor
Not required:
  • Technical AI development experience
  • Blockchain knowledge
  • Prior OriginTrail experience

About the instructors

Tomaž Levak

Co-founder @OriginTrail

A pioneer in decentralized knowledge infrastructure, Tomaž leads the development of the OriginTrail Decentralized Knowledge Graph, enabling discoverable, verifiable knowledge for trusted AI systems.

Laura Bishop

AI and Cyber Security Sector Lead @British Standards Institution (BSI)

Laura specializes in human-centric AI, psychological dimensions of technology adoption, and the role of standards in ensuring safe and ethical AI systems.

Mark Thirlwell

Global Digital Director @British Standards Institution (BSI)

Mark works at the intersection of AI safety, governance, and global regulatory development, contributing to standards and policy shaping responsible AI deployment.