Showing 1 - 6 of 6
Life, James Hein, Published on 22/10/2025
» Over the past couple of weeks, I dived deep into the Alice In Wonderland-like rabbit hole of chatbots and AI systems. This was not your typical "ask a few questions", but more along the lines of jailbreaking the AI to get behind the scenes. You may remember earlier commentary on the code behind the query. This is where the guardrails and biases of the model are coded and why the majority of all AI systems are currently leaning to the Left of the political spectrum.
Life, James Hein, Published on 24/09/2025
» There's going to be a lot on artificial intelligence topics this week so let's get started. For the time being, the most common way to leverage an AI product is using a prompt of some kind. To that end, you will see lots of posts on platforms declaring that they have the best god-level prompts for large language models (LLMs). A prompt is something like, "What are the top ten songs from Depeche Mode?", or "Draw me a picture of a frog on a toadstool in the style of Alice In Wonderland with vivid colours". The more detailed and nuanced the prompt, the better the desired outcome tends to be. As with everything in the computer world, there are bad actors looking to take advantage of this.
Life, James Hein, Published on 07/05/2025
» A while back I wrote about the political bias in Large Language Models (LLMs). Since then the models have evolved and David Rozado has conducted more recent tests based on four of the popular political orientation tests. Using the Political Compass, Political Spectrum, Political Correctness and Eysenck tests, he worked with xAI Grok 3 beta, Google's Gemini 2.5 pro, Deepseek V3, OpenAI GPT 4.1 and Meta's Llama 4 Maverick. In all but one of the tests Grok 3 was closest to the centre, and on average was the clear leader. All the models were still located in the Left Libertarian quadrant, with Grok just sneaking into a more Conservative area with the Eysenck test. These tests are of course but one way to measure the political leanings of any LLM. Overall however, it does still indicate the left-leaning bias in all models tested so far. If you want to see more details, you can visit David Rozado's substack.
Life, James Hein, Published on 12/02/2025
» The past weeks have been very heavily tilted towards artificial intelligence (AI) news. Before I cover some of it, a reminder that generative AI (gAI) is not the same as General AI (G-AI). The former is where the model can make some inferences, the latter is an AI system that can perform just like a human across multiple subject areas.
Life, James Hein, Published on 11/09/2024
» Do you own the hardware and software you purchased? Yes, no and possibly, so let's dive into an example. A man buys a second-hand Microsoft Surface from the Internet. It is one of a batch. He uses it for a few years until one day a massage pops up on the screen advising that Mastercard has locked the device and it should be returned to Mastercard. The man does some research and finds out that Microsoft has embedded some software in the firmware and BIOS that has enabled this to occur. It also turns out that this software can be found in other Microsoft and Apple devices, is very difficult to detect and requires a high skill level to remove, or you can just install Linux.
Life, James Hein, Published on 14/02/2024
» After my earlier article, I realised I was somewhat scant on what a Large Action Model (LAM), also called Large Agentic Models, are. As already mentioned, these have derived from the Large Language Models (LLM), or what people now refer generically as AI, discussed before.