I Have a Confession

I have a confession. I didn’t plan to be an actuary. I started my career designing and programming insurance computer systems. In those days computers were the size of a room, and we had to fit our programs in a space far smaller than the memory of a typical phone today. After recently seeing that a health insurance company was using AI to deny claims, I realized that I had been programming AI decades ago – not. It does raise the question, “What is AI?” I believe I can now discern traditional programming from AI, although it has taken a lot of reading through blog posts. The distinction also explains the time gap between my COBOL insurance systems and current artificial intelligence (AI): AI needs mega computers and mega data. Even with vast data, AI still does not appear to always get it right. That brings me to some interesting facts that explain why AI provides incorrect answers, or “hallucinates.”

  • Where does this mega data come from? For ChatGPT (and I am sure others), the source included Facebook and other social media. How many of your relatives are hallucinating when they post on Facebook?
  • Many current AI applications do not look for facts; they look for probabilities. AI may look at my ethnic background and, without information to the contrary, conclude that I have blue eyes and report that as a fact. Just because most individuals with my ethnic background have blue eyes does not mean that I do. AI fills in the blanks in the data with statistics, and statistics may not apply to individual cases. It isn’t even a matter of garbage in, garbage out; it is a matter of using statistics rather than facts in individual situations.
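To make the second point concrete, here is a toy sketch (my own hypothetical example, not how any real AI system is built) of a program that fills in a missing attribute with the most common value seen for a group. The data and the `impute_eye_color` function are invented for illustration:

```python
# Toy illustration: filling a missing individual attribute with the
# most common value for that person's group. A purely statistical
# "guess" ends up being reported as if it were a fact.
from collections import Counter

# Hypothetical records of (group, eye_color) observations.
records = [
    ("group_a", "blue"), ("group_a", "blue"), ("group_a", "brown"),
    ("group_b", "brown"), ("group_b", "brown"),
]

def impute_eye_color(group):
    """Return the most common eye color observed for this group."""
    colors = [color for g, color in records if g == group]
    return Counter(colors).most_common(1)[0][0]

# For anyone in group_a the program answers "blue" -- a statement
# about the group's majority, not about the individual.
print(impute_eye_color("group_a"))
```

The program is never wrong about the group statistics, yet it is confidently wrong about every brown-eyed member of group_a. That, in miniature, is statistics standing in for facts.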

I will have more to say on the potential and the limitations of AI in future posts. Stay tuned!