About Liam K. Dhar
Liam K. Dhar is a UK-based independent analyst working at the intersection of artificial intelligence governance, digital infrastructure, and emerging technological power systems. His research and writing focus on how advanced computational technologies reshape societies, institutions, and individual autonomy in ways that often outpace public understanding and democratic oversight.
Rather than approaching technology as a neutral tool, Dhar treats digital systems as political and social architectures. His work examines how algorithms encode values, how platforms centralise power, and how technical design decisions quietly shape economic opportunity, civic freedom, and human agency. He is particularly interested in the gap between technological capability and ethical responsibility, a gap that continues to widen as AI systems scale globally.
A central theme in Dhar’s analysis is AI governance. He explores how artificial intelligence is regulated, who sets its rules, and whose interests those rules ultimately serve. His work critically evaluates regulatory models emerging from the United Kingdom, Europe, the United States, and Asia, highlighting both their ambitions and their blind spots. Rather than advocating for blanket restrictions or unchecked innovation, he emphasizes governance frameworks grounded in accountability, transparency, and public legitimacy.
Another core area of focus is zero-knowledge and privacy-preserving systems. Dhar writes extensively about cryptographic architectures, decentralised verification models, and privacy-by-design approaches that aim to reduce data extraction while preserving functionality. He views these systems not simply as technical solutions, but as potential counterweights to surveillance-based digital economies and state-level data accumulation. His analysis interrogates whether such technologies can realistically scale without being co-opted by the same power structures they seek to challenge.
Dhar’s work also addresses algorithmic harms, including bias, opacity, labour displacement, and the erosion of consent in automated decision-making. He is especially attentive to how these harms disproportionately affect populations with limited digital bargaining power, such as workers, minorities, and users in the Global South. Rather than treating algorithmic harm as an unintended side effect, he frames it as a structural outcome of incentive-driven system design.
Decentralisation is another recurring subject in his writing, though approached with caution rather than idealism. Dhar analyses decentralised infrastructures not as inherently liberating, but as contested terrains shaped by governance choices, economic concentration, and technical complexity. He questions popular narratives that equate decentralisation with democracy, arguing instead that without strong social and institutional safeguards, decentralised systems can reproduce or even intensify inequality.
Across his work, Dhar maintains a human-centric perspective. He is concerned with how digital systems affect dignity, autonomy, and the capacity for individuals and communities to meaningfully participate in decisions that shape their lives. This perspective leads him to critique both corporate-led technology models driven by monetisation and state-led models driven by control, while remaining sceptical of purely technocratic solutions.
As an independent analyst, Dhar operates outside corporate, governmental, and venture capital affiliations. This independence allows him to write with analytical distance and intellectual rigour, prioritising long-term societal consequences over short-term technological enthusiasm. His writing is intended for policymakers, researchers, journalists, technologists, and readers seeking deeper clarity about the forces shaping the digital future.
Liam K. Dhar’s work contributes to a growing global conversation about digital dignity, technological sovereignty, and the ethical limits of automation. In an era where AI systems increasingly govern visibility, opportunity, and power, his analysis asks a fundamental question: not what technology can do, but what it should be allowed to do, and on whose terms.