Blog

Ammar


“First, do no harm” is what one of the earliest documented ethical mandates, the Hippocratic Oath, teaches us.

Not killing (or harming) a patient, such that they would be worse off after being seen by a medical practitioner, was the guiding principle that helped manage experimentation in what was then a new and unknown world.

As we went from magical healing by tribe elders or healers (which had some merit, though it was not well understood) to a scientific approach supported by technological advancement, experimentation was the key to progress.

Take bloodletting, for example: a hypothesis was formed, and it seemed to work some of the time, but continued experimentation proved risky and often made things worse.

It was that experimental mindset that brought advancement, but it became apparent that the principles of treatment had to be managed hand in hand with it. Experimentation alone wasn’t enough to bring progress; guidelines were necessary.

As I embark on studying technology ethics at academyEx, this question has been weighing heavily on my mind. Technical advancement has been happening at an exponential pace, and I admit it is exciting, but are we “doing no harm”? To quote one of my many astute fellow students, “it’s not about moving fast, it’s about moving forward.” Is this happening too fast, or are we too slow to respond?

Rapid advancements in artificial intelligence, biometric surveillance, and algorithm-driven social media have made “doing no harm” far trickier than it looked five years ago. Between 2020 and 2025 we have watched deep-fake election ads spread faster than fact-checkers can respond, record data-protection fines levied against Big Tech, and the world’s first comprehensive AI law pass in Europe. Taken together, the lesson is clear: no single actor, whether government, industry, or academia, can keep pace with technology’s ethical risks alone. Only constant, structured collaboration can turn the abstract idea of tech ethics into day-to-day guardrails.

Why “harm” keeps getting harder to pin down

Viral manipulation and misinformation

  • In India’s 2024 national elections, AI-generated avatars and audio clips blurred the line between truth and fiction, prompting the country’s former chief election commissioner to warn they could “set the country on fire.” Similar synthetic-media worries surfaced across Europe, where the EU’s Digital Services Act and forthcoming AI Act both single out deepfakes as “systemic risks.”

The data-privacy wake-up call

  • Meta’s €1.2 billion GDPR penalty in 2023 remains the largest privacy fine to date and forced firms worldwide to re-audit cross-border data flows.  A few months later Meta also agreed to a $725 million settlement over the Cambridge Analytica scandal, reminding regulators that old harms can resurface in new business models. 

Algorithmic bias and everyday discrimination

  • Investigations into automated hiring tools showed qualified applicants being screened out by opaque filters, underscoring how small design choices can scale into structural inequity. 

Government: from back-foot to blueprint

Looking back to 2020, we see governments intervening not only after the fact but up front, to avoid “harm” occurring in the first place.

  • April 2021 – EU draft AI rules limit police use of live facial recognition and threaten fines of up to 6% of global turnover. Why it matters: the first signal that Brussels would treat certain AI applications as “unacceptable risk.”
  • Oct 2023 – US Executive Order on AI forces frontier-model developers to share safety test results with Washington. Why it matters: it sets a transparency floor in the world’s largest AI market.
  • Nov 2023 – The Bletchley Declaration and UK–US safety pact bring 28 countries together around shared testing standards.
  • Mar 2024 – The EU Parliament approves the AI Act, the first binding, risk-tiered AI framework.
  • Apr 2025 – Online Safety Act codes finalised in the UK impose algorithm changes and personal liability for children’s safety.
  • Apr 2025 – Ofcom opens its first Online Safety Act investigation, into a suicide-encouragement forum, signalling early enforcement. Why these final four points matter: together, these moves show regulators shifting from reactive fines to proactive design mandates, a trend that will only accelerate as the EU AI Act phases in from 2025.

Industry: cautiously swapping “move fast” for “test first”

Industry has been seen adding rigour to its decisions when releasing a new model or advancement, a good sign that responsibility and pace can also be profitable.

  • Microsoft embedded OpenAI’s ChatGPT technology into “Copilot” only after months of joint red-team testing, betting that rigorous pre-release evaluation would out-compete rivals’ speed. 
  • Artists’ lawsuits against Stability AI and others over copyrighted training data helped persuade EU lawmakers to add strong transparency clauses to the AI Act. 
  • Big Tech’s own lobbying push also shows its limits: Brussels is now investigating Microsoft’s Bing and Meta’s content moderation under the Digital Services Act.

Academia & Civil Society: building the evidence base

Peer-reviewed research is framing what “responsible AI” should mean:

  • A 2024 Nature review argues that only a globally coordinated authority can keep pace with cross-border AI risks and avoid regulatory fragmentation. 
  • Studies on trust, governance, and healthcare AI published in Nature journals highlight recurring patterns: lack of transparency, unequal data, and unclear accountability. These findings feed directly into WHO and OECD guideline work.

Researchers also occupy seats at policy tables: Oxford’s Future of Humanity Institute advised the Bletchley Park summit, while academic fellows sit inside NIST’s AI Risk Management working groups in the US.

Convergence: why the triple helix matters

Bletchley Park 2023 exemplified the new model: diplomats (government), CEOs (industry) and scholars (academia) co-authored a single declaration on frontier-model testing. 

Under the UK-US deal, each country’s AI Safety Institute will share red team protocols and open evaluation sets, work that neither governments nor firms could credibly run alone. 

Looking ahead (2025 – 2030)

  • Continuous harm assessment: regulators should require dynamic risk reviews, not one-off compliance reports.
  • Open audit sandboxes: industry and academia can co-develop reference tests that agencies can adopt without recreating them.
  • Global minimum viable principles: the EU AI Act and US Executive Order can seed an eventual treaty, but only if low- and middle-income nations have a voice.
  • Public-interest red teams: fund independent researchers to stress-test models, mirroring the security community’s role in cryptography.
  • Ethics-by-design education: embed multidisciplinary ethics modules in computer science and Master’s programmes to normalise cross-sector thinking.

If the past five years have taught us anything, it is that ethical guardrails arrive most quickly, and stick most firmly, when government sets an enforceable floor, industry treats safety as a competitive advantage, and academics keep both honest with evidence. Keeping that tri-sector conversation alive is now the central task of tech ethics.

Still not enough

As I conclude this investigative research, I would be remiss not to identify an area I have not yet dug into: the inequities that exist globally with regard to technological advancement.

Technology can magnify existing inequities for groups that already sit at society’s margins.

  • First, the explosion of #ADHD-tagged videos on TikTok (more than 20 billion views) has spread welcome awareness, but doctors told the BBC it is also driving a wave of inaccurate self-diagnosis that leaves many neurodivergent people without proper care or protection from exploitative “quick-fix” services.
  • Second, during Australia’s 2023 referendum on creating an Indigenous “Voice to Parliament,” researchers traced an “ecosystem of disinformation” to social media algorithms that amplified racist memes and manipulated videos; Aboriginal health advocates warned the resulting online hate was pushing First Nations suicide-prevention hotlines to record demand.

Together these cases show why ethical review must extend beyond technical safety to ask who is being targeted, who is being left out, and how algorithmic design choices can translate into very real harms for people with ADHD, for Māori and Aboriginal communities, and for countless others whose voices are too easily drowned out.

I hope you enjoyed this research, and if you have any insight or opinion on what needs to be done to manage the inequity gaps while still managing technological advancement ethically and at pace, please reach out. It is something I am embarking on discovering as my “next piece of work”.

Sources

  • BBC News. (2023, October 30). TikTok ADHD videos may be driving people to self-diagnose, say doctors. https://www.bbc.com/news/health-67229879
  • BBC News. (2023, October 17). TikTok and the Voice: The misinformation swirling around Australia’s referendum. https://www.bbc.com/news/world-australia-67101571
  • BBC News. (2023, May 22). Meta fined €1.2bn by EU over US data transfers. https://www.bbc.com/news/technology-65672179
  • BBC News. (2023, December 18). EU agrees landmark rules on artificial intelligence. https://www.bbc.com/news/technology-67657424
  • BBC News. (2023, November 1). AI safety: World leaders sign Bletchley Declaration. https://www.bbc.com/news/technology-67285943
  • BBC News. (2023, October 31). US issues landmark AI safety order. https://www.bbc.com/news/technology-67254696
  • BBC News. (2024, April 12). UK watchdog investigates social media site under Online Safety Act. https://www.bbc.com/news/technology-68769563
  • BBC News. (2023, December 4). Australia’s Meta and Google showdown over misinformation laws. https://www.bbc.com/news/world-australia-67681129
  • European Commission. (2024). Artificial Intelligence Act. https://artificial-intelligence-act.eu
  • European Commission. (2022). Digital Services Act (DSA). https://digital-strategy.ec.europa.eu/en/policies/digital-services-act-package
  • White House. (2023, October 30). Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/
  • Bletchley Park Declaration. (2023). Bletchley Declaration on AI Safety. https://www.gov.uk/government/publications/ai-safety-summit-2023-bletchley-declaration
  • National Institute of Standards and Technology (NIST). (2023). AI Risk Management Framework (AI RMF 1.0). https://www.nist.gov/itl/ai-risk-management-framework
  • OECD. (2019). OECD Principles on Artificial Intelligence. https://oecd.ai/en/ai-principles
  • Leslie, D. (2023). Understanding the gap: A review of AI ethical principles and their implications. Nature Machine Intelligence, 5(4), 289–296. https://doi.org/10.1038/s42256-023-00626-5
  • Floridi, L., & Cowls, J. (2019). A unified framework of five principles for AI in society. Harvard Data Science Review, 1(1). https://doi.org/10.1162/99608f92.8cd550d1


The discussion around ethics and technology is prevalent and timely as we watch the AI race moving faster than we can comprehend, let alone resolve the social, ethical, and moral considerations still being debated. I wonder whether the world’s debates and congressional hearings are moving as fast as the emerging technology.

Surely this is not the first time humanity has had to deal with this.

I could not help but think of dystopian movies and TV shows like Minority Report, See, The Postman, and Waterworld (to name a few that come to mind). The writers depicted futuristic views of what could go wrong and how. A common theme is the chaos and destruction wrought by humankind upon itself as a result of failing to consider the moral and ethical implications of emerging technologies.

Technology, as we know it now, looks like the latest AI bot, LLM, or agent in the news. But allow me to take you on a journey back in time to reflect on the first unveilings of technology: farming, fire, the wheel, metallurgy, and weaponry. Every technology is, at some point, an emerging technology, and I like to think of it as a new tool or service that allows humanity to perform a usual task in a different way. The technologies that stick are the ones that really added value to the human race, or at least perceived value. The perception of value is what interests me.


Even “writing” was a form of technology that was not always looked upon favourably. This is evident in Plato’s dialogue Phaedrus, in which Plato recounts a myth where the Egyptian god Thoth (Theuth) invents writing and shows it to King Thamus. Thamus rejects it, saying writing will weaken memory and wisdom: “This discovery of yours will create forgetfulness in the learners’ souls, because they will not use their memories; they will trust to the external written characters… You give your disciples not truth, but only the semblance of truth… they will appear omniscient and will generally know nothing.”


Earliest Ethical Reflections on Technology – A Chronological Timeline

c. 45,000 BCE – Prehistoric Compassion (Shanidar Cave, Iraq): Archaeological evidence suggests Neanderthals practiced purposeful burials. In Shanidar Cave, one Neanderthal (Shanidar IV) was found with clumps of pollen around the grave, leading archaeologist Ralph Solecki to propose a “flower burial” – the idea that mourners placed flowers with the dead. Another Shanidar skeleton showed long-term care of a disabled individual. These findings imply early ethical sensibilities: care for the vulnerable and ritual respect for the deceased, long before written records.

c. 17,000–15,000 BCE – Cave Art and Ritual (Lascaux, France): Deep in the Paleolithic, humans created sophisticated cave paintings like those at Lascaux. Scholars theorize this wasn’t mere decoration; the prevailing view is that Lascaux’s images were part of spiritual or ethical rituals. Hunters possibly performed ceremonies in these painted caves to honor animals or ensure a successful, respectful hunt (“hunting magic”). The remote location and careful execution of the art suggest a sacred sanctum used for initiation or cooperative teaching. In essence, early humans appear to have reflected on their relationship to nature and prey – an ethical consideration – through art.

Replica of a Lascaux cave painting (c.17,000 BCE). Many researchers believe such Paleolithic art had ritual or educational purposes, indicating early spiritual/ethical engagement with hunting and nature.

c. 10,000–8000 BCE – Dawn of Agriculture (Fertile Crescent & Mythic Eden): The Agricultural Revolution profoundly altered human life with farming, settlements, and new social structures. Later traditions remembered this transition with ambivalence. For example, the Garden of Eden story (recorded in Genesis, likely based on earlier oral tradition) symbolizes humanity’s “fall” from an easy natural life to one of toil. After Adam and Eve gain forbidden knowledge, God condemns Adam to a life of farming: “cursed is the ground… in toil you shall eat of it… by the sweat of your face you shall eat bread”. This etiological myth frames agriculture as a double-edged innovation – necessary for civilization but seen as a punishment or loss of innocence. Ancient Mesopotamian folklore similarly contains contradictory origin stories about how farming began, indicating an early awareness that adopting agriculture was a momentous, not entirely positive, change. Humans reflected on the social upheavals (inequality, hard labor, property disputes) that farming brought, often casting these in moral or divine terms.

c. 4000–3000 BCE – The Potter’s Wheel and Creation (Egypt & Mesopotamia): Early technologies were often woven into creation myths, suggesting they carried ethical or sacred meaning. In ancient Egypt, artists of the Eleventh Dynasty (c.2130–1991 BCE) depicted the god Khnum forming human beings on a potter’s wheel. This is striking – a tool (the potter’s wheel, perfected in Egypt by 3000 BCE) is shown as the instrument of divine creation. It implies that craftsmanship and technology were seen as positive, godlike powers, used ethically by deities to shape life. Similarly, Mesopotamian myths credit gods with gifting civilization’s arts to humans. Sumerian legend tells how Enki and Inanna brought the Me (divine decrees of skills like writing, farming, weaving) to humanity – essentially, technology as a sacred trust. The ethical undertone is that using these innovations justly was part of the divinely ordained order.

c. 2200–1700 BCE – Earliest Laws on Tech Misuse (Mesopotamia): With civilization came written law addressing technology’s impact on society. The Code of Hammurabi (Babylon, c.1750 BCE) is one of the earliest legal codes and includes specific provisions to curb the misuse of technology. For instance, agriculture depended on irrigation canals and dikes; Laws §53–54 stipulate that if someone neglects their dike and causes a flood that destroys a neighbor’s crops, they must compensate for the loss. In other words, there was an early legal and ethical principle of responsibility for technological negligence (in this case, maintaining infrastructure). Another law (§55) similarly fines a person who leaves an irrigation canal open and floods a neighbor’s field.

c. 1750 BCE – Accountability in Medicine (Babylonia): The same Hammurabi code contains one of the first known discussions of medical ethics. It holds surgeons to account for their use of advanced tools: Law §218 says if a doctor uses a bronze lancet to operate and the patient dies or loses an eye, the surgeon’s hand shall be cut off. This harsh punishment reflects an early ethic: with the new “technology” of invasive surgery comes personal accountability. In short, Babylonian society recognized that specialized knowledge (like surgery or engineering) conferred power that needed ethical regulation – foreshadowing the principle “do no harm.”

c. 1500 BCE (legendary time, text c.300 BCE) – Forbidden Knowledge in the Book of Enoch: An ancient Hebrew apocryphal text, the Book of Enoch, offers a mythic critique of certain technologies. It describes events before the Flood when fallen angels (the Watchers) taught humans various arts. One angel, Azazel, “taught men to make swords, knives, shields, and breastplates, and made known to them the metals of the earth and the art of working them…,” after which “there arose much godlessness… and [humans] became corrupt in all their ways”. This story implies a deep-seated ethical concern: the introduction of metallurgy and weapons is linked with moral decay and violence. Early Jewish tradition thus portrayed advanced technology (especially weaponry) as knowledge that perhaps humans were not ready for, aligning it with the origin of sin and chaos. It’s a cautionary tale that too-rapid progress, or acquiring powerful tech from dubious sources, can lead to societal ruin – an idea that echoes down to later myths of “forbidden knowledge.”

c. 1100–700 BCE – Myths of Hubris and Punishment (Near East & Greece): Several early myths warn against human overreach through technology. In Mesopotamian tradition, the Tower of Babel story (Genesis 11, probably composed ~6th century BCE) castigates human arrogance in building a gigantic tower. The people of Babel innovate by baking bricks and using bitumen mortar – cutting-edge construction for the time – to build a city and tower “with its top in the sky.” God confounds their language and scatters them, suggesting a divine check on human ambition. Some later commentators (e.g. the medieval Rabbi Abarbanel) even interpreted the detail about brick-making as symbolic: technology breeds new problems, and while not forbidden, it leads humanity away from an ideal, simpler state.

In Greek mythology, Prometheus and Pandora form a paired lesson. The Titan Prometheus defies the gods to steal fire and give it to mankind – fire being a metaphor for all technology and knowledge. In Hesiod’s account (~700 BCE) and Aeschylus’ play Prometheus Bound (~460 BCE), this gift enables progress (cooking, metalwork, engineering, etc.), but it comes at a price. Zeus punishes Prometheus severely for empowering humans. Moreover, Zeus sends Pandora – the first woman – with a jar (or “box”) of evils that she unwittingly releases into the world, a counterweight to Prometheus’ gift of fire. The ethical message is twofold: on one hand, technology (fire) was a heroic boon that allowed civilization, even seen as a compassionate act by Prometheus (“I gave humans hope… and taught them the secrets of fire” he says). On the other hand, the gods’ retaliation and Pandora’s tale warn that new powers can unleash unintended consequences (suffering, toil, illness) if humans lack divine favor or wisdom. This mythic theme – that every great innovation carries risk – is one of the earliest expressions of technological ethics. Greek storytellers were essentially asking: Do our advances make us better, or will they backfire? The mixed fate of Prometheus and Pandora embodies that uncertainty.

c. 800–500 BCE – Iron Age Warfare and Moral Decline (Eastern Mediterranean): By the first millennium BCE, iron tools and weapons had spread, and people reflected on how metallurgy altered society. Hesiod’s Works and Days (circa 700 BCE) describes the prior Bronze Age race of men who made everything from bronze – their weapons, houses, tools – and “loved the lamentable works of Ares (war)”. He says this bronze race ultimately destroyed itself in wars and descended to Hades, leaving no name. Although framed in mythic terms, Hesiod’s account reveals a Greek view that an era defined by a new metal technology (bronze) was marked by violence and moral decline. In the Five Ages of Man, the Bronze Age is inferior to the idyllic Gold and Silver ages that came before – a clear ethical judgment on the corrupting influence of advanced weaponry. This pessimism may well echo real memories of how bronze (and later iron) armor and swords enabled large-scale warfare. It’s an early philosophy of history: technological might (here, metallurgy) without corresponding virtue leads to self-destruction.

5th century BCE – Chinese Debate on War Tech (Mozi, China): Across the world, ancient Chinese thinkers were also grappling with the ethics of technology, especially in war. Mozi (Mo-tzu), a Chinese philosopher around the 5th century BCE, founded a school of thought that explicitly opposed offensive warfare. In the Mozi texts, he denounces the use of advanced weapons and military campaigns as morally unjust and wasteful. While Mozi was skilled in defensive military technology (Mohists were reputed for designing fortifications and anti-siege devices), he argued that offensive military innovation was ethically wrong, as it brought widespread harm. This might be one of the earliest philosophical condemnations of a specific technology on ethical grounds. Mozi’s stance highlights that by this time, people recognized how technologies (in this case, weapons and strategies of war) could be evaluated not just for efficacy but for their alignment with moral good or harm to society.

5th–4th century BCE – Technology and Justice (Greece): Greek philosophy explicitly addressed the need for ethics to keep pace with technical progress. In Plato’s dialogue Protagoras (c. 4th century BCE), there’s a myth that after Prometheus gave humans fire and crafts, humans still lacked the ability to live together safely. Zeus then sent the god Hermes to distribute Justice and Moral Sense to all people, “so that there should be order in cities”. To Protagoras, this story explained that technical skills (fire, farming, building, etc.) were not enough – ethical and political wisdom had to be universal, or humanity would perish. This is an early articulation of a key principle: for society to benefit from technology, virtues like justice and respect must be widely shared, a “technology” in their own right. Everyone must partake in ethical reasoning just as they partake in using tools.

c. 370 BCE – The Critique of Writing (Plato’s Phaedrus, Greece): One of the most striking early evaluations of a new technology is Plato’s critique of writing. Writing began in Mesopotamia around 3200 BCE, but by Plato’s time it was becoming integral to Greek life, and not everyone thought this was good. In his dialogue Phaedrus, Plato recounts a myth where the Egyptian god Thoth (Theuth) invents writing and shows it to King Thamus. Thamus rejects it, saying writing will weaken memory and wisdom: “This discovery of yours will create forgetfulness in the learners’ souls, because they will not use their memories; they will trust to the external written characters… You give your disciples not truth, but only the semblance of truth… they will appear omniscient and will generally know nothing.” Through this story, Plato voices an ethical concern that resonates even today: does new information technology harm our cognition and judgment? To him, books might give a false appearance of knowledge without true understanding. This early skepticism shows humans reflecting on a communication technology’s impact on the mind and social discourse. (Notably, Socrates in the dialogue advocates the superiority of living, dialectical knowledge – implying that technology should serve, not replace, genuine human wisdom.)

c. 4th–3rd century BCE – Daoist Rejection of Complexity (China): In ancient China, the Daoist classic Tao Te Ching (Dao De Jing) presented an ideal of simple living, implicitly criticizing overreliance on technology. Chapter 80 of this text imagines a small contented community that “makes use of the knotted rope (old mnemonic cords) rather than writing”. It goes on to describe people who are satisfied with simple food, clothes, and home, and who do not travel afar in boats or carriages. This is essentially a plea to “return” to a low-tech society. By favoring knotted cords over writing, the text suggests advanced innovations (like writing, record-keeping, transport) breed unnecessary complexity and discontent. The ethical vision of Daoism here is that technological simplicity better preserves harmony, humility, and closeness to the Dao (the way of nature). While not condemning any single tool, Laozi advocates deliberately not using many new inventions to avoid the moral pitfalls (ambition, greed, strife) that come with them.

3rd century BCE – Reflections of an Emperor (Ashoka, India): While slightly later than “early civilisation,” the Indian Mauryan emperor Ashoka (r. 268–232 BCE) is worth noting for his unique ethical stance on warfare after using war technology to conquer. Following the bloody Kalinga War, Ashoka issued Edicts in stone proclaiming his remorse and renunciation of violence. He embraced Buddhist non-violence (ahimsa) and urged moral conquest through virtue instead of weapons. Ashoka’s edicts aren’t about a specific invention, but they represent an early ruler’s public ethical response to the “technology” of imperial warfare. In a sense, he tried to govern by ethics as a “technique” to replace the sword. This stands as an early historical example of attempting to mitigate the human suffering caused by military technologies through a change in values and policy.


In summary, from prehistory through the early first millennium BCE, humans consistently framed new tools and innovations in ethical terms. Archaeological sites show compassionate or ritual behavior around technologies like fire, hunting, and burial. Mythologies worldwide – from the fire theft myths (Prometheus in Greece, Maui in Polynesia, etc.) to the Garden of Eden, from the Tower of Babel to Pandora’s box – reveal a deep concern with how newfound powers might disrupt the natural or divine order. Early laws and philosophical texts likewise grappled with maintaining justice, responsibility, and balance in the face of technological change. Whether it was controlling floods from irrigation, punishing medical malpractice, questioning the value of writing, or reining in the use of weapons, our ancestors left a rich record of ethical reflection. These examples show that the question “should we do this, and what might happen if we do?” has accompanied each major leap in technology since the very beginning of human history.

I wonder if these dystopian movies are on to something we do not want to face. I will continue enjoying them, but I will be looking at them through a different lens of curiosity. Lessons can be learnt from the past, from the philosophy of the ages, and even from the myths we all love and know.

Timeless Ethical Advice for Emerging Technologies

Wisdom from the Ancients

  1. Every innovation comes with a consequence—seek balance, not just progress.
    From fire to the wheel, from writing to warfare, early cultures recognised that tools empower but also endanger. Ethical foresight must match technical foresight.
  2. The more powerful the tool, the greater the responsibility of its use.
    Babylonian laws, early surgery, and myths like Prometheus all teach that harm from misuse grows with the power of the innovation. Responsibility must be designed into every stage—from invention to implementation.
  3. Don’t confuse knowledge with wisdom.
    Plato warned that writing could produce the appearance of knowledge without understanding. Today, AI and big data demand that we prioritise critical thinking, not just information accumulation.
  4. Technology should serve humanity—not replace our humanity.
    Ancient stories about forbidden knowledge, like the Book of Enoch or Pandora’s box, remind us that technological advances must support human dignity, justice, and well-being—not undermine them.
  5. Anticipate unintended consequences—and include diverse voices to foresee them.
    Ethical insight in early civilisations often came from myth, law, or collective memory. Future innovation needs broad consultation—not just engineers, but philosophers, historians, elders, and ethicists.
  6. Embed justice and compassion into every system.
    Whether irrigating crops, designing cities, or creating code, ensure that the most vulnerable are protected and that equity is part of the architecture—not an afterthought.
  7. Choose humility over hubris.
    The Tower of Babel and Iron Age epics warned against arrogance in the face of new power. Modern technologists must approach innovation with reverence for its potential, not a rush toward domination.
  8. Sometimes the wisest choice is restraint.
    Daoist and Buddhist traditions suggest that not using a technology can be the ethical path. Just because we can doesn’t mean we should.

Sources:

  • Cambridge University – Shanidar Cave “flower burial” and Neanderthal compassion (cam.ac.uk)
  • Lascaux Cave paintings – spiritual/ritual interpretations
  • Reddit (AskHistorians) – Garden of Eden as agriculture origin myth; punishments for Adam (toil in farming); Mesopotamian agriculture etiologies
  • Lapham’s Quarterly – Egyptian god Khnum using the potter’s wheel to create humans
  • Code of Hammurabi translations – irrigation negligence laws; surgeon’s malpractice law (§218)
  • 1 Enoch (R.H. Charles trans.) – Azazel teaching weapon-making, leading to human corruption
  • Plato, Protagoras – Zeus sending Hermes with justice to humans (techne vs. ethics)
  • Hesiod, Works and Days – Bronze Age men obsessed with war and destroyed by it
  • Mozi (Stanford Encyclopedia / IEP) – Mozi’s condemnation of offensive warfare
  • Plato, Phaedrus – critique of writing weakening memory – “semblance of truth”
  • Tao Te Ching, Chapter 80 – ideal of knotted cords instead of writing; simple life without modern tech
  • Ashoka’s Edicts – Rock Edict 13, summaries of Kalinga War remorse
  • Prometheus Bound (as cited in Lapham’s Quarterly) – Prometheus: “I taught them the secrets of fire…” and discussion of crafts.

Ammar

Blog

Six weeks ago, I joined academyEx with a single intention: to re-invigorate my curiosity and love for learning. I had just left a corporate role feeling burnt out and disoriented. My executive functioning—planning, prioritization, self-management—had taken a backseat. I needed space, not just to breathe, but to rebuild. That’s what led me here.

And wow, it’s been a ride.

From day one, I felt a sense of pride just being accepted. Meeting the team during induction was electric. It was clear that this wasn’t traditional education. It was human-centered, adult-respecting, and deeply intentional. I felt seen.

Then came the first assignment.

Suddenly, I was overwhelmed. The excitement gave way to anxiety. I’d underestimated how much I relied on old systems of self-management that no longer served me. Topics like Global Geopolitics, IoT, Robotics, and Critical Thinking had me deep-diving like never before—but without a solid system, I was swimming in too many directions at once.

The inner critic got loud. “Maybe I’m a bad student.” That was the voice until… I received feedback. It was kind. Informative. A lifeline. It gave me direction and confidence to keep going.

I won’t lie—I wish I could go back and coach my past self. But I also know: it’s the messy middle that made me stronger.

Now, six weeks in, I feel great. I’m just at the beginning, but I already see how much I’ve grown. And I know I made the right call choosing academyEx. Their support structure—especially the integration of industry mentors—was exactly what I needed to keep going.

So, to anyone considering this journey, here’s my advice:

  1. Trust the process. Do the weekly readings. They’re more valuable than they first seem.
  2. Speak up. Use Slack. Talk to lecturers. Ask for help out loud. Watch how your vulnerability builds community.
  3. Plan your info intake. Build a note-taking system from day one. I highly recommend Forever Notes.
  4. Use AI tools. Jennie, Storm, ChatGPT, NotebookLM—play with them. They’ll be your sidekicks for staying ahead.
  5. Reflect often. Daily, weekly. Keep a journal. Share it out loud once in a while. You never know who it might help.

This blog? Written through a heart-to-heart conversation with ChatGPT. It’s part reflection, part advice, and all truth. Here’s to the next six weeks.

Nicole Arthur

Blog


In the ever-evolving landscape of agile methodologies, staying updated with the latest trends and insights is crucial for professionals in New Zealand and around the world. In August we had the privilege of hosting a two-day event with the renowned Dave Snowden. This event proved to be a full-on, informative, and transformative experience, offering attendees the opportunity to meet new people, learn new things, and challenge their biases. In this blog post, we’ll delve into the key takeaways from this event that left participants inspired and equipped with fresh perspectives.

  1. Guard Against Explicitness:
    In the world of agile, it’s vital to avoid overly explicit processes or rules that can be easily manipulated. Make it challenging to trace input to output, fostering a more robust and adaptive system.
  2. Embrace Leadership Diversity:
    To stimulate a wealth of fresh ideas, it’s imperative that leaders exhibit diverse perspectives. Encourage leaders to be different, as this diversity of thought enhances problem-solving and innovation.
  3. Break Down Disagreements:
    When disagreements arise, don’t shy away from them. Instead, break complex issues down into smaller, more manageable components until consensus is reached. Conflict can be a catalyst for positive change.
  4. Navigate Complexity with Caution:
    Scrum is an excellent framework for transitioning from a complex to a complicated system. In complex systems, relying on past experiences to predict the future is often unreliable. Embrace adaptability and real-time assessments.
  5. Prioritize Attitudes over Compliance:
    Attitudes serve as leading indicators of an agile team’s health and performance. Focus on fostering positive attitudes and a shared vision. Compliance, on the other hand, should be a lagging indicator, following a strong foundation of attitudes and alignment.


The Reviving Agile event with Dave Snowden was undoubtedly a remarkable two days filled with insightful discussions, networking opportunities, and the chance to challenge our biases. The key takeaways highlighted here are just a small subset of the notes I took from the event that I could immediately apply. By embracing complexity, diversity, sense-making, effective communication, and a commitment to continuous learning, agile teams in New Zealand and beyond can revitalise their approach and navigate the ever-changing landscape with confidence.

Nicole Arthur

Blog

Our minds are intricate webs of thoughts and perceptions, constantly processing information and making decisions. However, lurking within the labyrinth of our cognition are cognitive biases—systematic patterns of deviation from the norm or from rationality in judgment, shortcuts our brains have developed from past experience to help us belong in society.

Let’s explore 15 cognitive biases that profoundly shape the way we think and behave. By delving into these biases and reflecting on their presence in our own lives, we can empower ourselves to make more informed decisions and navigate the complex world of human thought.

Reflection and Self-Discovery: As you read through these cognitive biases, take a moment to reflect on your own life. Have you ever fallen victim to confirmation bias when seeking information? Or perhaps you’ve noticed the anchoring effect influencing your judgments about others. By gaining awareness of these biases and acknowledging their presence in your life, you can begin to untangle the web of cognitive patterns that shape your decisions.

Understanding cognitive biases is not just an exercise in self-awareness; it’s a powerful tool for making more informed choices in our daily lives. By recognizing when these biases are at play, we can mitigate their influence and strive for more rational, balanced thinking. So, empower yourself to navigate the complex world of human thought with greater clarity and wisdom.

Nicole Arthur

Blog


In a world filled with chaos and uncertainties, the adage “keep calm and carry on” has become more than just a catchy slogan. It’s a mantra for maintaining composure in the face of adversity. However, have you ever considered that this simple phrase holds a deeper meaning, especially in the context of psychological safety? In this blog post, we will explore the profound connection between staying calm and ensuring psychological safety, offering insights and examples that shed new light on this age-old wisdom.

1. The Power of Calmness:

At the core of creating a safe and supportive environment, whether in personal or professional settings, lies the ability to stay calm. It’s the first step toward building trust, fostering open communication, and nurturing psychological safety.

  • Staying Calm Personally: When you remain composed, even in challenging situations, you set an example for those around you. Your calm demeanor can be contagious, reassuring others that they are in a safe space. Example: Imagine you’re leading a high-stress project meeting, and tensions are running high. By staying calm and collected, you model emotional resilience for your team, encouraging them to express their concerns without fear of judgment.
  • Helping Others Stay Calm: As a facilitator and trainer, you have the unique opportunity to guide others in maintaining their composure. This is especially important when addressing sensitive topics or during conflict resolution. Example: During a diversity and inclusion training session, a participant becomes emotional while discussing their experiences with discrimination. Your ability to stay calm and empathetic allows the individual to feel supported, fostering an atmosphere of psychological safety and acceptance for everyone.
  • Techniques: To support with staying calm, you can employ various techniques such as stopping and pausing to collect your thoughts, grounding yourself in the present moment to stay focused, practicing mindful breathing for relaxation, and consciously choosing a positive and curious mindset to manage stress effectively.

2. Psychological Safety:

Psychological safety is the belief that one can express their thoughts, ideas, and concerns without fear of negative consequences. It’s the foundation for effective communication, innovation, and collaboration within teams and organizations.

  • The Role of Calmness in Psychological Safety: Staying calm is the linchpin of psychological safety. When individuals feel that those around them are composed and nonjudgmental, they are more likely to share their authentic thoughts and feelings. Example: In a workplace where leaders react explosively to mistakes, employees may hesitate to report errors. Conversely, in a calm and supportive environment, employees are more likely to admit their mistakes, leading to a culture of continuous improvement.

3. “Keep Calm and Carry On” Revisited:

The famous “keep calm and carry on” posters, originally designed during World War II, take on new significance when viewed through the lens of psychological safety. They remind us that maintaining our composure is not just about personal resilience; it’s about creating an atmosphere where others can thrive.

  • Application in Training and Facilitation: As an experienced facilitator and trainer, you can incorporate this insight into your sessions. Encourage participants to embrace the idea that staying calm is not just an individual trait but a collective responsibility. Example: In a conflict resolution workshop, discuss the importance of staying calm and composed when addressing conflicts within teams. Use the poster as a visual reminder of this principle.


The first step toward ensuring safety, both psychological and emotional, is the ability to stay calm. It’s a simple yet profound concept that can transform the way we interact with others, both personally and professionally. You have the power to cultivate this environment of calmness and psychological safety, fostering growth, innovation, and resilience within your teams. So, remember, when in doubt, “keep calm and carry on” – not just for yourself, but for the well-being of those around you.

Nicole Arthur

Blog


Agile practices have transformed the way products are developed and managed, with the focus changing to collaboration, adaptability, and customer-centricity. In this blog post, we’ll explore key Agile practices tailored to the role of a Product Owner, providing a description and example for each concept.

1. MVP (Minimum Viable Product):
Description: An MVP is the core version of a product that contains only the essential features needed to solve a specific problem for early users. The value of an MVP lies in getting something into your customers’ hands for feedback as soon as possible, built with the minimum effort.

Advantages: early feedback and learning, faster time to market, and risk reduction.

Example: you are getting married; instead of making the whole cake in different flavours, the baker makes a cupcake of each flavour for you to try and choose from.

2. User Stories:
Description: User stories are brief (2–3 sentences), non-technical descriptions of a feature from an end user’s perspective. They generally follow the template: As a <user> I want <what> so that <why>. A user story is deliberately incomplete: it is a promise to have a conversation later, between the user/PO and the developers, to work out the finer details.

Advantages: Stimulates discussion on the story, including the reasons behind requirements and features, and reduces the documentation effort.

Example: “As a mother, I want to be able to order an Uber for my kids so that I know they will get home safe when I can’t pick them up.”

3. Epics:
Description: Epics are large pieces of work that are then broken down into smaller, manageable user stories.

Example: “Develop log-on screen” could be an epic, broken down into user stories like “As an Apple user, I want to sign on using my Apple credentials so that I don’t have more usernames and passwords to remember” and “As a regular user, I want the option to request that my password be reset when I have forgotten it.”

4. User Stories vs. Tasks:
Description: User stories represent features from the user’s viewpoint, while tasks are the specific actions required to complete a user story. If your tool hierarchy puts stories and tasks at the same level, a useful distinction is that user stories deliver value to your customer, while tasks are the work the team must do to enable completion of those stories (internal value to the team, with no directly visible value to the customer). Stories are usually sliced vertically and tasks horizontally.

Example: User Story – “View Project Timeline,” Task – “Design UI for Project Timeline.”

or User story “Facilitate workshop on product ownership”, Task “create miro board for product ownership workshop”

5. Story Mapping:
Description: Story mapping visually represents user stories and their relationships, aiding prioritisation and release planning. It creates a visual of the complete user journey and of how the product can be developed and built upon to achieve each step in that journey. It also helps identify the minimum that can be developed for the user to achieve their outcome. Story mapping is done with all key stakeholders and contains two important anatomical features.

  • The backbone of the application is the list of activities the product supports (the user journey).
  • The walking skeleton is the version of the product that supports the least number of necessary features (what the product will do to support the user’s journey at each step).

Advantages: Useful for initial product backlog filling, road mapping, and transparency. Helps the team understand the various user journeys through the product.

Example: Mapping out features for an e-commerce app, starting with the homepage and branching into categories, product pages, and checkout.

6. Definition of Ready:
Description: The criteria that a user story must meet before it’s considered ready for development. It is an explicit agreement between team members and the Product Owner about when an item of work can be brought in to be worked on. The Definition of Ready can differ from team to team and is usually completed as part of backlog refinement or analysis.

Advantages: Transparency, reduction of misunderstanding and conflict, controlled internal quality, PO accountability and increased team efficiency.

Example: Clear acceptance criteria, user personas defined, and wireframes provided, small enough to be able to be worked on in the iteration, estimated and prioritised, Item clearly understood by the team.

7. Definition of Done:
Description: The criteria that a user story must meet to be considered completed. It is an explicit agreement between team members and the Product Owner about when a work item is complete. It can also differ from team to team.

Advantages: Transparency, Clear and common understanding of what is needed for an item to be considered done. Controlled quality.

Example: Code written, reviewed, and tested; UI/UX approved; documentation updated, outcome meets any necessary standards or conventions or laws as required.

8. Bottom-Up Release Planning:
Description: Starting with individual user stories and gradually building up to define releases. This allows the team to create a forecast, based on their velocity and number of sprints, of when they might complete certain value drops. It is best to show the minimum, average, and maximum velocity lines on the chart.

Great image and article here from Project-management.info

Example: Prioritize and select high-priority user stories for the next release based on their complexity and impact.
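The velocity-based forecast described above can be sketched in a few lines. This is an illustrative sketch, not a formal tool: the point total and velocities below are made-up numbers, and rounding up reflects that a partly used sprint still has to be scheduled.

```python
# Forecast how many sprints remain for a backlog, under three velocity
# scenarios (minimum, average, maximum). All figures are illustrative.
import math

def sprints_needed(remaining_points: int, velocity: float) -> int:
    """Round up: a partially used sprint still counts as a sprint."""
    return math.ceil(remaining_points / velocity)

def forecast(remaining_points: int, velocities: dict) -> dict:
    """One forecast per velocity scenario, e.g. {"min": ..., "avg": ...}."""
    return {name: sprints_needed(remaining_points, v)
            for name, v in velocities.items()}

# 120 points left; velocities observed over past sprints (hypothetical).
print(forecast(120, {"min": 15, "avg": 20, "max": 30}))
# {'min': 8, 'avg': 6, 'max': 4}
```

Plotting those three scenario lines against the backlog total gives the min/average/max velocity chart the description mentions.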

9. Product Vision Board:
Description: A visual representation of the product’s vision, goals, and target audience. It allows the team to capture, describe, visualise, and validate the product vision and strategy.

Advantages: initiates innovation and product discovery, transparency, customer orientation and resource management.

Roman Pichler has a great canvas here

10. Business Model Canvas:
Description: A strategic tool for developing and documenting a business model. A good way to sketch out new business ideas iteratively.

Advantages: Transparency and traceability of relationships, customer orientation, documentation and competence focus.

Example: Using the canvas to outline key elements like customer segments, value propositions, revenue streams, and cost structure. – Template available here at www.strategyzer.com

11. Personas:
Description: Fictional characters representing different user types and their needs. A persona describes a representative of a cluster of users in terms of their needs, characteristics, and behaviour. It’s a fictional person with a name, personal background, personality, goals, skills, pain points, challenges, and wants. Personas should be placed where they are visible to the team and used within user stories. They help the team keep the users’ needs and pain points in mind.

Advantages: Improves understanding of the target group, avoids feature over-engineering, and improves product quality.

Example: “Tech-Savvy Sarah” – a young professional seeking advanced features in your software.

12. Backlog Management:
Description: Continuous refinement and prioritisation of the product backlog. Choose a prioritisation method that ensures the team is working on the highest-value items for their customers and stakeholders. The backlog should be an ever-evolving, living list of everything that is required for the product. It also needs to align with the product strategy.

Example: Reviewing and updating the backlog during regular sprint reviews, and actively using it during sprint planning and backlog refinement sessions.

13. Estimation (Story Points or Planning Poker):
Description: Assigning relative sizes to user stories to estimate the effort and complexity of completing the work. Key areas to consider are the amount of work, the complexity of the work, and any risks or uncertainty. Story points are relative, not absolute: everything needed to take a story to done is estimated, and story points should not be mapped to hours.

Planning poker activity link here from Mountain Goat Software. Planning poker is performed in either planning or refinement. It enables consensus-oriented estimation of work. The dev team estimates the size of upcoming items. Each member uses a Fibonacci (or similar) numbered card set. Privately they select a card representing their estimate, and once all cards are placed face down, the team turns them over. High and low estimators explain the reasons for their estimates. After discussion, the next estimation round is run, repeating until the estimates converge.

Advantages: agreement on estimation across the team, easier and faster estimation, and managed risk. Planning poker is also fun.

Example: Using planning poker, the team estimates that a user story is a 5-point task.
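The reveal-and-discuss loop can be sketched as a small helper. This is a hypothetical illustration (the names, cards, and estimates are invented) covering only the mechanical part of a round: detecting convergence and identifying who should explain before the next round.

```python
# Illustrative sketch of one Planning Poker reveal (all data made up).
# After everyone turns over a card, either the estimates have converged,
# or the high and low estimators explain before the next round.

def reveal(estimates: dict):
    """Return (converged, who_explains) for one round of revealed cards."""
    values = set(estimates.values())
    if len(values) == 1:
        return True, []            # consensus reached, no discussion needed
    low, high = min(values), max(values)
    explainers = [name for name, card in estimates.items()
                  if card in (low, high)]
    return False, explainers

print(reveal({"Ana": 3, "Ben": 5, "Caro": 13}))  # (False, ['Ana', 'Caro'])
print(reveal({"Ana": 5, "Ben": 5, "Caro": 5}))   # (True, [])
```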

14. Data-Driven Metrics (Burn Up/Down Charts, Velocity):
Description: Using data to track progress and predict future performance.

Burn down / burn up – a graphical representation of the current state: work completed vs. work remaining over time. It can provide an early warning that the sprint goal will not be achieved, or identify where work is being added to the team without being prioritised first.

Velocity is the number of story points the team completes during a sprint. You cannot compare velocity between teams; each team is different. Velocity is best used for forecast planning, and also during planning to challenge the amount of work the team is committing to for the increment.

Example: Tracking sprint burn down to visualize how much work is remaining.
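The unprioritised-scope warning mentioned above can be sketched by scanning the burn-down series for days on which remaining work went up instead of down. The daily figures here are invented for illustration.

```python
# Sketch of a sprint burn-down check: track remaining story points per day
# and flag days where remaining work increased (scope added mid-sprint).

def burn_down_alerts(remaining_by_day):
    """Return the days (1-indexed) on which remaining work went up."""
    alerts = []
    for i in range(1, len(remaining_by_day)):
        if remaining_by_day[i] > remaining_by_day[i - 1]:
            alerts.append(i + 1)   # index i corresponds to day i + 1
    return alerts

remaining = [40, 35, 30, 33, 25, 18, 10]  # days 1..7 (hypothetical)
print(burn_down_alerts(remaining))  # [4] -> work was added on day 4
```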

15. 3 Amigos:
Description: A collaboration between the Product Owner, Developer, and Tester to ensure shared understanding of user stories.

Example: In a grooming session, the three roles discuss a user story to clarify its requirements.

16. Role Responsibilities to the Wider Organization:
Description: The Product Owner has a responsibility to engage in collaborative activities across the wider organisation. These broader Agile practices are focused on creating transparency and highlighting risks, dependencies, and impediments. They are a way of scaling Agile practices across teams and improving planning and alignment.

Examples: Scrum of Scrums – Regular meetings to coordinate between multiple Scrum teams; Increment Planning – Collaboratively planning increments to ensure alignment with the larger product strategy; OKRs – Defining Objectives and Key Results to measure progress toward product goals.

Conclusion:
In the dynamic world of Agile product development, these practices serve as a robust framework for Product Owners to guide their teams in building valuable products. By incorporating these concepts into your teams, you’ll empower them to work cohesively, respond to change effectively, and deliver customer-centric solutions.

Nicole Arthur

Blog

Nicole Arthur

Blog

Continuing on from creating connection in self mastery, we are now moving on to influence. As Franklin Covey’s website puts it: “Focus your energy and attention where it counts, on the things over which you have influence. As you focus on things within your Circle of Influence, it will expand.”

The first activity we did was to identify our influencing style. The link to the quiz is here.

Then to map out our circle of concern and circle of influence.

Then a reflection activity to reflect on how much you focus on things out of your control and influence and how you might let them go.

Before tying it back to our stakeholders and their concerns and influence and how we can help with that. This allows us to gain clarity and awareness around their concerns and also to build connection.

It starts with bringing awareness, creating visibility, and creating connection. It works with teams, and it works with ourselves. Become aware, pick something small to work on, make some changes, see what happens, and start again.

Nicole Arthur

Blog

Continuing on from motivations and culture in self mastery, let’s dive into how we can create better connection.

Marshall Rosenberg is credited with creating the Nonviolent Communication method. You can see his videos here, as well as his book Nonviolent Communication: A Language of Life: Life-Changing Tools for Healthy Relationships (Nonviolent Communication Guides).

Nonviolent Communication is a way of communicating that creates connection and a win-win outcome in which all parties get their needs met.

Our first steps were to select the feeling words that resonated with each of us.

and do the same for the needs words from the universal needs list.

From here we completed the pause-and-listen activity during the day.

It starts with bringing awareness, creating visibility, and creating connection. It works with teams, and it works with ourselves. Become aware, pick something small to work on, make some changes, see what happens, and start again.