By South China Morning Post at 3:40 PM 5/30/2020 (PDT)
Already, governments around the world have access to vast amounts of data collected through large technology corporations like Google and Apple. Turning this data and expertise toward the defense sector would destroy privacy rights as we know them. The possibilities are limitless.
Take China, for example. It has rolled out a system that uses AI facial recognition to detect criminals and hopes to deploy it everywhere. Doing so would require putting up hundreds of cameras to search for these suspects. Thousands of people would be scanned every day, regardless of their consent, all in the name of safety. In China specifically, this will make it harder for anti-government activists to escape notice, or for anyone to hide from the government. AI technology is weaponized here to invade the privacy of hundreds of thousands of people.
China provides a powerful example of how using AI technology for defense and military purposes harms civilians. If a country develops technology powerful enough to be weaponized against its opponents, what is to stop it from turning those same technologies and practices on its own civilians? Encouraging countries to develop military AI also encourages them to use that technology on their own citizens. Some countries, however, are aware of this problem. Cuba, for instance, has voiced this concern and wants to ensure that any technology developed is handled ethically for its population. This is a good first step, but there is no guarantee that such promises will be kept.
When it comes to the distribution of AI technology, countries worry about "making sure the destination of the AI is ... watched and looked out for as it can end up in wrong hands leading to more corruption and malpractice," as stated by Pakistan. However, no matter which government possesses it, there is always the chance that corruption already exists within that government. There is no way to know for certain that a government will use AI safely and ethically, or that governments themselves are not the "wrong hands" they fear.