Engineers Had Better Be Careful with Algorithms AND AI

Engineers might not be accomplishing the wonderful things they do today without the science of the 9th-century Persian scholar Muhammad ibn Musa al-Khwarizmi, who lived from about 780 to 850 AD in Baghdad. Centuries after his death, al-Khwarizmi's works introduced Europe to decimal numerals and algebra. His algebra is regarded as one of the foundations of all the sciences.

The Latinized version of his name became a popular term: algorithm.

This term has recently come under public scrutiny and taken on some very menacing implications. The roaring success of companies like Google is founded upon algorithms that search for and rank web pages. There is now huge concern about the power of technology companies and their algorithms, and about reported massive efforts by certain nations to use these same methods to influence U.S. citizens and elections with false information.

Muhammad ibn Musa al-Khwarizmi wrote of the 'calculus of resolution and juxtaposition,' more briefly referred to as al-jabr, or algebra.

Using algorithms and artificial intelligence (AI), anyone can build a website that gathers news from other sites and then filters that information for any particular perspective. Or, with enough cash, they can buy an already famous website and tune its content. Add a few vague references and a couple of outright lies, and you have a powerful influencer, especially for people who prefer to read only brief bits of news and don't have the time to chase down the full story.
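The aggregate-and-filter pipeline described above is disturbingly easy to build. Here is a minimal sketch in Python; the sources, headlines, and keyword sets are all invented placeholders, not data from any real site:

```python
# A minimal sketch of perspective-filtered news aggregation.
# All sources, headlines, and keywords below are invented placeholders.

AGGREGATED = [
    {"source": "wire-a", "headline": "New tariff plan draws praise from industry"},
    {"source": "wire-b", "headline": "New tariff plan draws criticism from economists"},
    {"source": "wire-c", "headline": "Local team wins regional championship"},
]

def filter_for_perspective(items, promote, suppress):
    """Drop items matching the suppressed slant; rank promoted items first."""
    def matches(item, words):
        return any(w in item["headline"].lower() for w in words)

    kept = [item for item in items if not matches(item, suppress)]
    kept.sort(key=lambda item: not matches(item, promote))  # promoted first
    return kept

slanted = filter_for_perspective(AGGREGATED, promote={"praise"}, suppress={"criticism"})
# The opposing coverage (wire-b) simply vanishes from the reader's feed.
```

A few dozen lines like this, pointed at real feeds, produce a site that looks like news but quietly omits one side of every story.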

Engineers themselves need to be wary of search algorithms and AI. Any company can outsource to a firm that guarantees an uptick in sales of product X, not by advertising to inform people about the product, but by manipulating the web content associated with that product and by generating the infamous 'buzz.' Precisely because the work is outsourced, it can be very tempting to executives. The method is fraudulent, but it is soft fraud and very difficult to prove.

Maybe certain websites and TV channels should not be allowed to call their content "news" unless it is unbiased information, which is essentially what news really is. Filtered, biased content would have to be labeled 'perspectives' or 'viewpoints' rather than news.

All the social media sites are now trying to screen out so-called fake news, but that is at least partly impossible. If a posting contains lies, how are you going to tell? What are the criteria, and where, exactly, is the line? They may be able to catch only the really gross cases.

Maybe some enterprising engineer can create an AI program to scan all 'news' throughout the web to remove the fake stuff. But really, how can you or the AI program decide? EEs (and journalists) should be very careful about what they pass on, like, or forward. Don't be part of the problem.
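To see why neither the engineer nor the AI program can draw that line cleanly, consider a deliberately naive rule-based detector. This is a toy illustration, not a real system, and the trigger phrases are invented:

```python
# A toy sketch of rule-based "fake news" detection, showing its brittleness.
# The trigger phrases below are invented examples, not from any real filter.
SUSPECT_PHRASES = {"miracle cure", "doctors hate", "they don't want you to know"}

def looks_fake(headline):
    """Flag a headline if it contains any suspect phrase."""
    text = headline.lower()
    return any(phrase in text for phrase in SUSPECT_PHRASES)

# It catches a gross case...
assert looks_fake("Miracle cure discovered, doctors hate it!")
# ...but it also flags a legitimate article debunking the same claim...
assert looks_fake("Fact check: no, this 'miracle cure' does not work")
# ...and it misses a fabricated story that avoids the trigger words.
assert not looks_fake("Senator secretly resigned last night, sources say")
```

More sophisticated statistical classifiers soften these failure modes but never eliminate them; someone still has to decide what counts as a lie before any model can be trained to find one.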

What do you think? Can we produce algorithms that get really good at rooting out false information? And isn't that itself really dangerous? Who gets to decide what's false?