AI Transparency Dashboard

As a creative technologist, I am sharing my AI Transparency Dashboard to be transparent about my approach to and principles around AI, and to encourage thoughtful reflection on how we use AI in our personal and professional lives. This dashboard will evolve because the landscape evolves, and so do I.

I first learned the term “AI Transparency Dashboard” from Sandy Carter at SXSW Sydney 2025. She predicts that more organisations will implement AI Transparency Dashboards as part of their business strategy because these dashboards surface things like data sources, how bias is being managed, and supply chain audit trails. This type of transparency improves internal processes, builds trust, and allows people to make more informed choices.

While AI Transparency Dashboards are aimed at organisations, I think individuals should also consider making their own—not necessarily for public viewing, but as part of personal tech hygiene and responsible deployment practice, so that we can be more intentional about how we use technology. Creating one can help you better understand and manage your relationship with AI.
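A personal dashboard need not be elaborate. As one hypothetical sketch, it could start as plain structured data that you review periodically; the field names, the review date, and the little audit check below are my own illustration, not any standard:

```python
# A hypothetical starting point for a personal AI transparency dashboard:
# structured data you can review and update. Field names are illustrative.
personal_dashboard = {
    "principles": [
        "My synthesis, writing, and work product are my own",
        "The right tools, for the right purpose, at the right time",
        "Only ask AI to do what I can verify",
        "No PII, private, or proprietary data in third-party AI services",
    ],
    "tools": [
        {"name": "Qwen 3", "hosting": "local", "use": "indexing, editing"},
        {"name": "Qwen 2.5 Coder", "hosting": "local", "use": "coding support"},
        {"name": "NVIDIA Parakeet v3", "hosting": "local", "use": "transcription"},
        {"name": "Perplexity.ai", "hosting": "cloud", "use": "research"},
    ],
    "last_reviewed": "2025-11-01",  # illustrative date
}

# A simple supply-chain check: which tools send data off-device?
cloud_tools = [t["name"] for t in personal_dashboard["tools"]
               if t["hosting"] == "cloud"]
print(cloud_tools)  # ['Perplexity.ai']
```

Even this small amount of structure makes it easy to ask questions of your own stack, such as which tools fall outside your direct control.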

My principles

First and foremost, my synthesis, my writing,1 and my work product are my own unless otherwise specified. I have spent too long cultivating my ‘voice’ to relinquish it to an outsourced entity.

I am tech agnostic and work on the principle of “the right tools, for the right purpose, at the right time”.

I do not ask AI to do anything I don’t know how to do or am unable to verify.

Personally identifiable information (PII), private data, proprietary materials, and things that do not belong to me and are not intentionally Internet accessible should not be uploaded to cloud-based or third-party AI services.

My clients

My priority with my clients is to ensure they have the tools and the knowledge to achieve their objectives. Technology may not always be the right solution. For that matter, I may not be either, and will say so when necessary. AI may sometimes be the right solution, but only if we both agree it is fit for purpose and the client wants it in their supply chain.

The problem space and the solutions need to be in alignment with both the client’s values and my own.

My stack

Large Language Models (LLMs) contribute to the infrastructure I have set up to assist with the orchestration and administration of my work and life management—they help with the scutwork so I can get on with the real work. I try to use local LLMs where possible, for privacy and for control over my supply chain, and these form part of my sovereign knowledge workbench.

As I mentioned before, my words and work product are my own, but I use local LLMs to help me interrogate my thinking, for developmental editing, and to roast me so that I can do what I do better.

The exception to my writing rule at the moment is to use LLMs to help shape my social-media posts promoting the content I create to match the algorithmic needs of each platform. I am not naturally algorithmically performant and do not have the time or energy to adapt to the ever-changing whims of social-media platforms. I am experimenting with solutions for this because my pragmatism and my ‘voice’ are presently in conflict.

As much as it pains me, Enshittification—the degradation of the quality of products and services online for profit—has meant that traditional search engines are becoming increasingly mediocre at their job. When they fail me, I use Perplexity.ai for research, Claude for technical work, and Google AI Studio for things the other two don’t do.

Part of my work involves investigating tools, both to support my clients and for my own professional development. These tools will float in and out of my stack as needed. In other aspects of my life, I do what I can to minimise unnecessary AI use and consumption. I also monitor my supply chain to the best of my ability based on the information and functionality made available.2

In my sovereign knowledge workbench, my local LLM stack currently consists of:

  • Qwen 3 for indexing, roasting, and developmental editing
  • Qwen 2.5 Coder for analysis, planning, and coding support
  • NVIDIA Parakeet v3 for transcription (Deepgram for cloud-based)
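The post does not say how these models are served. As one hypothetical setup, local models like these are often run behind Ollama's HTTP API; the sketch below builds a non-streaming request for a "roast my draft" prompt. The endpoint, model tag, and prompt are assumptions, not a description of my actual configuration:

```python
import json
import urllib.request

# Hypothetical local setup: a Qwen model served by Ollama at its
# default endpoint. Model tag and prompt wording are illustrative.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_request(model: str, prompt: str) -> dict:
    """Build a non-streaming Ollama /api/generate payload."""
    return {"model": model, "prompt": prompt, "stream": False}

def roast(draft: str) -> str:
    """Ask the local model for blunt developmental feedback."""
    payload = build_request("qwen3", f"Roast this draft honestly:\n\n{draft}")
    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        # Non-streaming responses return the generated text in "response".
        return json.loads(resp.read())["response"]
```

Because everything stays on localhost, the draft never leaves the machine, which is the point of keeping this part of the workbench local.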

Where it is interesting to share, I will be writing about the development of my sovereign knowledge workbench in the Field Notes of my Substack, Life, the Universe, and STEAM, if you want to follow these explorations.


Footnotes

  1. I am a defender of the much maligned em-dash and Oxford comma.

  2. Know what would make this easier? AI Transparency Dashboards.