AI: Beyond anthropomorphic informatics
We need a better way of talking about the importance of these models
People get so excited when machines can do things that humans can do. So much so that our classification of current developments in regression algorithms is labelled according to how much anthropomorphism we apply to them.
In case you skipped that class at school, anthropomorphism is the attribution of human traits, emotions, or intentions to non-human entities. If you’ve called your computer “stupid”, that’s anthropomorphism.
The investment hype-cycle is currently running in the following hierarchy:
Machine Learning - e.g. using a regression model to make decisions on when to buy and sell a stock
Artificial Intelligence - e.g. using a regression model to make decisions on whether a car should accelerate
Generative Artificial Intelligence - e.g. using a regression model to predict the next words in a conversation.
I am over-simplifying in classifying all of these as regression models, but they are basically predicting things based on what has happened in the past, which was the essence of regression when I studied it.
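To make the over-simplification concrete, here is a minimal sketch of that common thread: fit a line to past observations and extrapolate the next value. The data and function names are made up purely for illustration; real trading, driving, and language models are vastly more sophisticated, but the "predict from the past" core is the same.

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    a = cov / var
    b = mean_y - a * mean_x
    return a, b

# "Things that have happened in the past": five toy observations.
history = [10.0, 12.0, 14.0, 16.0, 18.0]
xs = list(range(len(history)))

a, b = fit_line(xs, history)
next_value = a * len(history) + b  # extrapolate one step ahead
print(next_value)  # → 20.0
```

Whether that next value is a stock price, a steering input, or (loosely) the next word, the shape of the problem is the same: learn a function from history and apply it one step forward.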
What’s fascinating to me is that we are valuing most the applications that are most human. Consider asking a human to do each of the above:
Most children can write a sentence by the age of 7. Generative AI.
Most people can drive a car by the time they are 17. Artificial intelligence.
Few people can predict stock prices with any accuracy. Machine learning.
So why the hype about getting computers to do tasks that can be done by children?
I think it’s twofold.
Firstly, it’s about anthropomorphism. We can relate to a computer creating an image and it really impresses us if it can do it better than we can. Predicting stock market changes accurately seems so far beyond our ability that we dismiss it in the way we might think about a machine that can move tonnes of soil. The task is so far out of our reach that we simply can’t contemplate it.
Secondly, it has the potential to have much more impact on a large number of people. Machine learning has ruined the career outlook of future stock-pickers because the quant hedge funds are making so much more money than humans ever could. But not many of us were ever going to be professional stock-pickers, and those who were are now all becoming experts in building statistical models to trade instead.
There are a lot of people who make a living from writing, photography, or art, and they are really worried about what’s going to happen to their jobs. If things go the way of the stock market, then what they’re doing today is going to get done by machines, and they will need to re-train to learn how to build or leverage those machines themselves.
This is going to have an impact for sure, but to me seems like the logical progression from the Spinning Jenny or the word processor replacing the typing pool.
I’m much more interested in the examples where computers are able to achieve what humans really can’t. I can’t relate to computers working out how proteins fold, but I understand that it’s going to massively improve the development of new drugs that can improve the quality of life of hundreds of millions of people. I don’t understand what it means that AI has just discovered 98% of all the stable crystal structures we’ve ever known about, but I know it’s going to have an impact on the creation of new materials that can make the world a better place.
It also blows my mind that a computer system can accurately predict stock prices, but I know that’s “just” machine learning.
We should spend less time thinking about how relatable technological advances are and consider more what impact they might have in the future.
Anthropomorphic informatics really is artificial intelligence, and we should ourselves be applying more intelligence to classifying the impact of these technologies.