Search Results for “browsers”

Showing 1 - 8 of 8

LIFE

We know they know we know about privacy

Life, James Hein, Published on 03/12/2025

» Strap in, because today I'm covering what will be happening over the next few months to gather even more data about you and what you are doing on the internet. Let's start with the recent exposure of around 3.5 billion phone numbers from WhatsApp, the supposedly private platform.

LIFE

The hidden dangers of AI

Life, James Hein, Published on 24/09/2025

» There's going to be a lot on artificial intelligence topics this week so let's get started. For the time being, the most common way to leverage an AI product is using a prompt of some kind. To that end, you will see lots of posts on platforms declaring that they have the best god-level prompts for large language models (LLMs). A prompt is something like, "What are the top ten songs from Depeche Mode?", or "Draw me a picture of a frog on a toadstool in the style of Alice In Wonderland with vivid colours". The more detailed and nuanced the prompt, the better the desired outcome tends to be. As with everything in the computer world, there are bad actors looking to take advantage of this.

LIFE

LLMs are left-leaning models

Life, James Hein, Published on 07/05/2025

» A while back I wrote about the political bias in Large Language Models (LLMs). Since then the models have evolved, and David Rozado has conducted more recent tests based on four of the popular political orientation tests. Using the Political Compass, Political Spectrum, Political Correctness and Eysenck tests, he worked with xAI Grok 3 beta, Google's Gemini 2.5 pro, Deepseek V3, OpenAI GPT 4.1 and Meta's Llama 4 Maverick. In all but one of the tests Grok 3 was closest to the centre, and on average it was the clear leader. All the models were still located in the Left Libertarian quadrant, with Grok just sneaking into a more Conservative area on the Eysenck test. These tests are of course but one way to measure the political leanings of any LLM. Overall, however, the results still indicate a left-leaning bias in all models tested so far. If you want to see more details, you can visit David Rozado's Substack.

LIFE

The future of AI is LAM

Life, James Hein, Published on 14/02/2024

» After my earlier article, I realised I was somewhat scant on what a Large Action Model (LAM), also called a Large Agentic Model, is. As already mentioned, these have derived from the Large Language Models (LLMs), or what people now generically refer to as AI, discussed before.

LIFE

Moving images to the next level

Life, James Hein, Published on 16/03/2022

» Let's start this week with a couple of software and technology announcements. The first is from the developer Dominic Szablewski, who has developed a simple new image format. You will have heard of PNG, JPEG, MPEG, MOV and MP4, which he calls complex. Enter the Quite OK Image Format (QOI). Most of the older formats are closed, require libraries and take a lot of computing power to implement and use.

LIFE

You are being tracked and here's how to avoid it

Life, James Hein, Published on 02/02/2022

» I decided to spend some time this week addressing security and data sharing. This was prompted by me reading Permanent Record by Edward Snowden, watching Dr Robert Epstein on The Joe Rogan Experience, my son enrolling in a cyber security course and my experience over the years.

LIFE

Transpacific cable is cut, for now

Life, James Hein, Published on 16/09/2020

» In light of the problems between the USA and China, and concerns that those in power in Beijing want to grab data from US networks, Google and Facebook have dropped plans to build an undersea cable between the US and Hong Kong. The new plan limits the landing points to the Philippines and Taiwan, excluding Hong Kong. The HK section of the cable is built but will not now be activated, due to a national security agreement between the US government and Google and Facebook. I predict that if Joe Biden wins the next US election this decision will be revisited.

LIFE

Don't call AI bigoted

Life, James Hein, Published on 06/11/2019

» Despite what some claim, Artificial Intelligence is not racist. Google built a system to detect hate speech, or speech that exhibited questionable content. Following the rules it was given, it picked out a range of people with what some tried to claim was a bias toward black people. Wrong. The AI simply followed the rules, and a larger number of black people and some other minorities, as defined in the US, were found to be breaking those rules. It didn't matter to the machines that when one group says it, it isn't defined as hate speech by some; it simply followed the rules. People can ignore or pretend not to see rules, but machines don't work that way. What the exercise actually found was that speech by some groups is ignored while the same thing said by others isn't. As the saying goes, don't ask the question if you're not prepared to hear the answer.