Modern AI in Class Warfare

   In 1966, a group of educated, politically active people formed an organization whose ideals were education and social welfare. They facilitated social reform by organizing meetings. They conducted classes on how to run an economically viable household. They ran free breakfast programs for poor children and after-school education programs. They also came to the aid of their poor neighbors when those people were under threat of physical violence.
    In 1967, the FBI initiated a covert action against that group in order to end its existence. The covert actions included, but were not limited to, media disinformation, legal harassment, targeted assassinations, and spies who infiltrated the organization and spread disunity.
    By 1980, that political party had only 27 members. Its goals of social reform and an end to the systematic disenfranchisement of its community had officially failed. That group of political activists was called the
Black Panther Party.


    The techniques of divide and conquer were known to the Roman Empire and were often used against hostile civilizations it encountered. Then, in 1967, the FBI modernized some of those techniques and applied them to groups it determined to be hostile to the interests of the United States.

    I made a movie called ALGORITHM about a database the NSA had built that gathered and stored all Internet and radio communications. That database was then accessible to the NSA through modern data-mining techniques, a tool we more commonly call a search engine.
    The NSA had what many may consider a valid motivation for gathering that data and making it easily accessible through a search engine. I wrote an essay called
Baidu and the Great Cannon explaining that motivation. Feel free to read it and come back.
    Many people question the validity of the NSA’s motivation because of the level of power the aforementioned database gives whoever controls it. Put another way: the leadership of today may be trustworthy, but if, because of that trust, we give them tools that empower them to the extreme, what happens when someone else takes their place? That next person may not be as benevolent.
    The fear of imbuing leadership with too much power is historically well-founded. We don’t need to look further than the aforementioned FBI, or the East German Stasi.

    There are many kinds of artificial intelligence, and this topic could be written about for the rest of your and my natural lives. Tons of scientific papers, philosophical speculation, and science fiction have been written exploring this topic and its many moral implications.
    But, for the sake of simplicity, I’m going to limit the topic to two main kinds of artificial intelligence: Deep Learning AI and Strong AI.

    Strong AI is still in the realm of fiction. This is when a computer becomes self-aware; this is when the machine becomes conscious. In the early days of computer science, we thought this would be easy because we thought we understood consciousness. But, attempting to make Strong AI has taught us we actually know very little about human consciousness, which also revealed that we’re nowhere near creating self-aware AI. At least, not on purpose.
    People like Elon Musk are afraid of Strong AI, largely because they anthropomorphize AI, giving it human tendencies and fears. That logic is flawed because, unlike humans, AI will not have the evolutionary survival mechanisms built into it that make humans tend toward violence.

    Another possible reason to be afraid of Strong AI is because it may not properly value human life. It may choose to optimize for something that doesn’t involve the survival of the human species. Put simply, it may wipe out humanity because it doesn’t take us into account.
    I strongly doubt that either fear-based scenario will happen. I think, in the case of a human/AI war, the AI will just leave Earth. We need the biosphere to survive. A machine does not. From a purely utilitarian perspective, the AI would have to exert less effort to survive by leaving the planet than by killing most/all humans.

    It’s quite possible that strong AI will emerge from a deep learning AI, but since we don’t know where it’s going to come from yet, for now I’m going to assume they are separate categories.
    Deep Learning AI is a pattern-recognition program that studies vast quantities of data in order to find patterns. For example, many tech companies that deal with images like to tout their ability to recognize faces. The way that’s done is by showing the Deep Learning AI millions, maybe billions, of images that include people’s faces. Slowly, over time, the AI learns the difference between a human face and clouds, trees, dogs, or anything else.
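    To make "learning patterns from examples" concrete, here is a minimal sketch of the idea in Python. This is not how a real deep-learning system works (those learn millions of parameters); it's a toy nearest-centroid classifier, and every feature name and number below is invented for illustration.

```python
# Toy illustration of learning a pattern from labeled examples:
# average the feature vectors seen for each label, then classify a
# new input by whichever average ("centroid") it lands closest to.

def train(examples):
    """examples: list of (label, feature_vector) pairs.
    Returns the per-label average of the features seen."""
    sums, counts = {}, {}
    for label, vec in examples:
        acc = sums.setdefault(label, [0.0] * len(vec))
        for i, value in enumerate(vec):
            acc[i] += value
        counts[label] = counts.get(label, 0) + 1
    return {label: [v / counts[label] for v in acc]
            for label, acc in sums.items()}

def classify(centroids, vec):
    """Pick the label whose centroid is closest (squared distance)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: dist(centroids[label], vec))

# Invented features: [skin-tone ratio, edge density, symmetry score]
training_data = [
    ("face",  [0.8, 0.4, 0.9]),
    ("face",  [0.7, 0.5, 0.8]),
    ("cloud", [0.1, 0.2, 0.3]),
    ("cloud", [0.2, 0.1, 0.4]),
]
centroids = train(training_data)
print(classify(centroids, [0.75, 0.45, 0.85]))  # a face-like input
```

    A real face recognizer replaces those three hand-picked numbers with features the network discovers on its own, but the principle is the same: distill many examples into a learned pattern, then compare new inputs against it.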
    Then there’s the way credit card companies use deep learning. Why would Visa need AI? Remember, deep learning has pattern recognition at its core. Visa is in the business of making money. It does that by shifting money from one account to another and charging a transaction or processing fee.
    Built into U.S. law is the fact that should Joe trick Visa and pretend to be Silvia and spend Silvia’s money, Visa is liable and Silvia is not. Joe didn’t trick Silvia; he tricked Visa.
    In order to protect itself from Joe’s fraud, Visa sets up a Deep Learning AI that monitors Silvia’s purchases and begins to establish how, when, where, and what she buys. It then uses that pattern to search for anomalies.
    If Silvia lives in Orange County (like I do), purchases instant Chai Latte at Trader Joe’s in Costa Mesa on a Friday (like I did), and, within 15 minutes, also purchases gasoline in San Jose (400 miles north-northwest of Costa Mesa), the Deep Learning AI will recognize the strangeness of that. It will flag the gas sale, and Silvia won’t be able to use her Visa until she calls Visa and verifies each of her recent purchases. Or Silvia will tell Visa she bought the Chai but didn’t buy the gas, at which point Visa issues Silvia another credit card with another credit card number.
    Visa will save money because Joe’s fraud will have stopped with one tank of gas and not the home-automation system he was planning on purchasing next.
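    The "impossible travel" rule in the Silvia example can be sketched in a few lines of Python. This is a hedged illustration, not Visa's actual system: the distance table, the 600 mph threshold, and the transactions are all invented for the example.

```python
# Sketch of an "impossible travel" anomaly check: flag a purchase if the
# cardholder would have had to move implausibly fast since the previous
# purchase. All locations, distances, and thresholds are invented.
from datetime import datetime

# Rough straight-line distances in miles (toy lookup table).
DISTANCES = {
    ("Costa Mesa", "San Jose"): 400,
    ("Costa Mesa", "Irvine"): 10,
}

MAX_MPH = 600  # faster than this between purchases looks fraudulent

def miles_between(a, b):
    if a == b:
        return 0
    # Unknown city pairs fall back to 0 (never flagged) in this toy.
    return DISTANCES.get((a, b)) or DISTANCES.get((b, a)) or 0

def flag_anomalies(purchases):
    """purchases: list of (timestamp, city) in chronological order.
    Returns the purchases that imply an impossible travel speed."""
    flagged = []
    for (t1, c1), (t2, c2) in zip(purchases, purchases[1:]):
        hours = max((t2 - t1).total_seconds() / 3600, 1e-9)
        if miles_between(c1, c2) / hours > MAX_MPH:
            flagged.append((t2, c2))
    return flagged

history = [
    (datetime(2016, 9, 2, 12, 0), "Costa Mesa"),  # Chai at Trader Joe's
    (datetime(2016, 9, 2, 12, 15), "San Jose"),   # gas, 15 minutes later
]
print(flag_anomalies(history))  # the San Jose gas purchase is flagged
```

    Covering 400 miles in 15 minutes implies 1,600 mph, so the second purchase is flagged; a 10-mile hop to Irvine in the same window would pass. Visa's real models fold in far more signals (merchant type, amount, time of day), but this is the core of the pattern-versus-anomaly idea.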
    The law-enforcement applications of Deep Learning AI should be immediately clear. However, many of those applications involve more complex human behavior and psychology, so I’m not going to go into them here.

    On September 6, 2016, NASA engineers predicted how society will collapse if we keep doing what we’re doing. It’s the same warning as Nick Hanauer’s TED talk, and it’s what caused the French Revolution. They all say that we are not immune to the forces that have taken down other prosperous societies: if too much power, wealth, and prosperity is concentrated in too few people while the vast majority are left with little or nothing, the poor rise up and kill or evict the elites.
    However, in this iteration, the elites have access to Deep Learning AI, so things might go a little differently.

    The following is a fictional but possible explanation of current events.
    Deep Learning AI is expensive because it requires an entire data center, each of which has a price tag of about $2 billion. However, for a multi-billionaire, or someone so wealthy that they no longer measure their wealth and power in money, such things are affordable.
    The elites may normally use the Deep Learning AI to make stock purchases, which would quickly pay for itself. That activity is common enough and fairly well known by now. In fact, it has created several largely unexplained stock crashes that happen so quickly, and involve so many transactions, that their sources can never be traced.
    Now, take that stock-trading Deep Learning AI and point it toward humanity. Social unrest begins to manifest because poor people are upset with how wealthy the elites have become. The Deep Learning AI analyzes the normal activities of the poor and detects anomalies, which it then flags for a closer look. It finds the next leader of the Black Panther Party, the next Martin Luther King Jr., the next Malcolm X, or a U.S. Gandhi.
    In the introduction, I mentioned how the FBI covertly took down the Black Panther Party. It was crude. It was public. It was sometimes violent. And it made a lot of people lose faith in the FBI’s ability to be on the side of good.
    The elites know that their power exists only because, and only so long as, people continue to give them power. The moment people stop believing the elites have power, that power is gone, and all their comforts, and possibly their lives, end with it. Thus, it’s in the interest of the elites to exert whatever form of control they use in a way that doesn’t wake people up to the fragility of elite authority.
    I’ve mentioned this in previous posts, but its applications are so broad in modern society that it’s worth repeating: when humans experience extreme emotions, our brains release a chemical that paralyzes our frontal lobes. It makes us incapable of reason and complex thought. That means that when we’re angry, we’re not methodical. Method and tactics are why we can capture lions. They have anger, strength, stealth, and superior senses of smell and hearing. But we have tactics, which means tools and the proper use of those tools.
    If a Deep Learning AI detects that a group of people is threatening to rise up and disrupt the U.S. way of life, all the elites would need to do to ensure the damage either doesn’t happen or stays localized and limited is present that group with something that makes them furious. That fury will override their ability to organize and reason. The poor won’t be methodical, and so they won’t win their freedom or power.

    While the story above is hypothetical, the scenario is happening now. It may not be run or controlled by elites, but it is definitely happening. Twitter, Fox News, CNN, even NPR all go from one story to the next, each more emotionally charged than the last. The result is that most people are in a constant state of extreme emotion.
    The news story might be Brock Turner the Rapist getting only three months in jail for raping a woman. It might be the Cincinnati Zoo shooting a gorilla after a child fell into the gorilla’s enclosure. It might be Black Lives Matter stories about police shooting innocent black people. All those stories are important and point to social inequalities that must be addressed.

As a society, when we constantly go from one extreme emotion to the next, we paralyze ourselves; we limit our growth and progress. The war over economic inequality may not be intentional or organized. The war may simply be with human tendencies and entropy. But it’s a war nonetheless. As long as people stay angry, scared, and hopeless, change will not happen.