Personal principles
To kick things off, here are my basic principles for my own use of AI.
They can mostly be summed up in a few points:
- All content that I publish on this site will be written by me. I will not use AI to write content for me; however, I will use it to help during the ideation process. AI is very useful for getting an initial structure together, but a full article needs a human touch.
- Any code I generate will always have my oversight. I use GitHub Copilot and similar tools a lot; they're great for getting boilerplate out there and far more useful than Google for answering difficult questions. However, I will always make sure I fully understand the code I'm using, rather than just copying and pasting it.
- I will always be transparent about my use of AI. If I ever use AI to help me with a project, I will say so.
AI in open source
The use of AI in open source is a tricky topic. While I'm on board with the idea that AI can empower people to contribute to open source projects they previously couldn't, it's important that we ground our use of AI in a clear set of principles.
For me, these are:
1. Always be transparent
Whenever you use AI in a PR, be transparent that you've used it and where you used it.
This allows the reviewer to scrutinise the generated code specifically and ensure that it not only meets their expected standard but also comes from an acceptable source (a lot of training data doesn't).
A good way to do this is to append [AI] to the end of your commit message.
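For example, a tagged commit message might look something like this (the change described here is hypothetical):

```
Add retry logic to the feed fetcher [AI]

Copilot generated the initial retry loop; I reworked the backoff
handling and verified the behaviour myself.
```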
2. AI shouldn't speak for you
AI-generated summaries aren't great: they're often too lengthy, they make things up, and they can miss important context about your work.
Communicate your change clearly. It doesn't have to be free of spelling errors or a perfect symphony of your work, but it should be clear and concise. A good way to do this is to use bullet points, as in the sketch below.
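For instance, a short bullet-point PR description (with made-up details) could look like:

```
- Fixes date parsing in the scheduler for non-English locales
- Adds a regression test covering the failing case
- Test fixture data was generated with Copilot and reviewed by me [AI]
```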
3. AI should be used to help, not replace
As Daniel also points out, an LLM should never think for you.
Use it as a tool to help you jump over individual hurdles, not to write your entire contribution. You should be able to understand and explain the code that you are writing.
As tools like GitHub Copilot become more popular, it's important to create new ways of working that use AI as a helping hand. Be cautious, though: if you lean on it too much, you may end up handicapping yourself, which is something I found myself doing when I first started using Copilot.
The wider view
There are a lot of great use cases for AI, and it's sure to only get better as time goes on. That makes it all the more important to understand how we should use it and what we should do to protect our users.
The BBC recently published their principles on AI, which I think make a great starting point for anyone looking to create a set of principles for their own use of AI.
They cover the BBC's main objectives: acting in the best interests of the public, prioritising talent and creativity, and being open and transparent about their use of AI.
Creating your own principles
As suggested in the /ai manifesto, it's a good idea to create your own principles around the use of AI.
You'll find my variant of this page here.