AI Transparency Dashboard
As a creative technologist, I am sharing my AI Transparency Dashboard to be transparent about my approach and my principles when it comes to AI, and to encourage thoughtful reflection on how we use AI in our personal and professional lives. This dashboard will evolve because the landscape evolves, and so do I.
I first learned the term “AI Transparency Dashboard” from Sandy Carter at SXSW Sydney 2025. She predicts that more organisations will implement AI Transparency Dashboards as part of their business strategy because such dashboards show what AI is used and how, what the data sources are, and how bias is managed, and they support supply chain audit trails. This type of transparency improves internal processes, builds trust, and allows people to make more informed choices.
While AI Transparency Dashboards are aimed at larger organisations, individuals and small businesses should also consider making their own, not necessarily for public viewing, but as part of personal tech hygiene and responsible deployment practice, so that we can be more intentional about how we use technology. Creating your own is a good way to better understand and manage your relationship with AI.
My principles
First and foremost, my synthesis, my writing,1 and my work product are my own unless otherwise specified. I have spent too long cultivating my skills and my voice to relinquish them to an outsourced entity.
I am tech agnostic and work on the principle of “the right tools, for the right purpose, at the right time”.
I do not ask AI to do anything I don’t know how to do or am unable to verify.
Personally identifiable information (PII), private data, proprietary materials, and anything that does not belong to me and is not intentionally Internet accessible should not be uploaded to cloud-based or third-party AI-integrated services.
My stack
AI
Large Language Models (LLMs) are part of the infrastructure I have set up to assist with the orchestration and administration of my work and life management; they handle the scutwork so I can get on with the real work. I try to use local LLMs where possible for privacy and control over my supply chain, and these form part of my sovereign knowledge workbench.
My local LLM stack currently consists of:
- Qwen 3 for indexing, roasting, and developmental editing
- Qwen 2.5 Coder for analysis, planning, and coding support
- NVIDIA Parakeet v3 for transcription
My remote stack currently consists of:
- Perplexity for research
- Claude for coding and developmental editing support
- Google NotebookLM for qualitative analysis on my publicly available content
- Deepgram for transcription
At this time, Claude is being used more heavily for some of the tasks I intend for the local stack. I expect this to change as I develop a better understanding of Qwen’s capabilities and how best to use it.
Where it is interesting to share, I will write about the development of my workbench in the Field Notes of my Substack, Life, the Universe, and STEAM, if you want to follow these explorations.
Other tools
My IDEs are not connected to remote AI assistants but have access to Qwen.
I use Adobe Photoshop and Adobe Express for photo and asset editing and creation. In the traditional sense, not the generative sense.
I am in the process of de-Googling my ‘productivity suite’ software, and as I audit what’s in my stack I am slowly migrating away from related software services whose AI policies are ambiguous or lack user-level controls.
My podcasting-related kit can be found here.
General notes
Part of my work involves investigating tools to be able to support my clients and for professional development. They will float in and out of my stack as needed. In other aspects of my life, I do what I can to minimise unnecessary AI use and consumption. I also monitor my supply chain to the best of my ability based on the information and functionality made available.2
As much as it pains me, enshittification—the degradation of the quality of online products and services for profit—has meant that traditional search engines and other conventional resources are becoming increasingly mediocre at their job. When they fail me, I use Perplexity for research-specific tasks and Claude for technical and other queries. Their responses are verified before use.
My clients
My priority with my clients is to ensure they have the tools and the knowledge to achieve their objectives. Technology may not always be the right solution. For that matter, I may not be either, and will say so when necessary. AI may sometimes be the right solution, but only if we both agree it is fit for purpose and the client wants it in their supply chain.
The problem space and the solutions need to be in alignment with both the client’s values and my own.
Unless the client or the solution requires it, AI is not involved in the production of client deliverables beyond what is addressed in the General notes. No client-specific or proprietary information is provided to the services in my remote stack. To be honest, I don’t see a reason to provide it to my local stack either.
My work
As I mentioned before, my words and work product are my own, but I may use LLMs to help me interrogate my thinking, for developmental editing, and to roast me so that I can do what I do better. I am also currently experimenting with them for software development support (code and test) on my own projects as part of professional development.
The exception to my writing rule, at the moment, is using LLMs to learn how to craft or re-shape social-media posts promoting the content I create to match the algorithmic needs of each platform. I am exploring solutions for this because I am not naturally algorithmically performant and do not have the time or energy to adapt to the ever-changing whims of social-media platforms. As with all writing, I expect to improve over time, and this workflow will stabilise3.