Primary Research

Welcome, my dear readers, to the next blog post. For my primary research I created a survey with the help of Google Forms, asking students and friends what they think about Artificial Intelligence and its use in the commercial security and military sectors. My questions aimed to gain some insight into opinions regarding the everyday use and ethics of these developing technologies. Let us delve together into some questions that may shed light on people's opinions on the matter.



As most of my participants are in the 17-34 age range, this research will represent the views and habits of younger and adult students, a range I belong to myself.



100% of my respondents admitted to using AI in their lives. I personally used ChatGPT to help find an idea after getting creatively stuck. As we can see, AI is already present in most students' lives. Recreation and work take up a smaller part of the diagram, and to my surprise no one answered 'Not at all', meaning we have all interacted with it at least once.


A majority of responders claim they use AI daily, while most of the rest use it weekly, with only one person answering 'monthly'. This shows AI has made its way into our daily lives, becoming a supporting tool that helps us tackle the tasks we deal with regularly. Such a result was a surprise to me, as my own use of ChatGPT is limited to a monthly basis, showing once again how diverse AI usage habits can be.



As I have encountered rumours of the Chinese government using AI cameras to track the movements of potential persons of interest, I decided to commit part of my secondary research to finding out whether that is factually correct. As Agraval (2022) mentions, different regions of China use different methods to enforce their Social Credit System (SCS). Regions across the country mostly collect information from collaborators or from national public service databases, which is then sorted by Artificial Intelligence to determine whether a citizen should have a positive or negative score. A positive score may result in discounts on services and goods, while a negative score may affect the ability to acquire a loan or, in the most severe cases, may lead to a person being publicly shamed.
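The scoring mechanism described above can be sketched as a toy aggregation: events gathered from different databases contribute positive or negative points, and the total decides a citizen's standing. This is purely my own illustrative assumption about the general shape of such a system, not a description of any real SCS implementation; the event names, point values, and threshold are invented for the example.

```python
# Toy sketch of a social-credit-style scoring pipeline. Event names,
# point values, and the zero threshold are illustrative assumptions
# only; real SCS implementations vary by region and are not public.

def aggregate_score(events):
    """Sum the point values of all recorded events for one citizen."""
    return sum(points for _, points in events)

def classify(score):
    """Treat a non-negative total as a positive standing."""
    return "positive" if score >= 0 else "negative"

# Example records pulled from hypothetical public-service databases.
records = [
    ("volunteer work", +5),
    ("late loan payment", -10),
    ("charity donation", +3),
]
standing = classify(aggregate_score(records))  # -2 points overall
```

The key point the sketch illustrates is that the AI component is essentially a sorting and scoring step over data collected elsewhere, which matches how Agraval (2022) describes the process.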





Over 61% of my respondents answered that they have been recorded or highlighted while entering a shop, and I myself witnessed such an event in the "Maxi Grosik" store on Blackbird Road, Leicester LE4 0FW, in October. Face detection software works with the camera that overlooks the entrance to the building. While walking through the door, we are greeted by a screen above our heads, presenting us with a zoomed-in focus on our faces and showing the time and date we entered the shop. While such use might help deter potential thieves, it also raises suspicions about the potential use of those recordings. If a data leak were to happen for any reason, our recordings could be used in malicious ways. The question of at which point we should stop sacrificing our privacy to assure greater security will most likely keep being discussed, in parallel with the improvement of smart cameras and their capability to detect threats.



As mentioned in my introduction post, the iBorderCtrl programme, which was tested in eastern parts of Europe from 2016 to 2019, displayed an AI avatar on screen to serve as an electronic border officer. One of the features it was equipped with was the ability to detect lies, although the success rate of that feature has been questioned, as around 30% of detections were false. If it is possible to determine a person's intention to deceive, and to pinpoint all the clues we look for in a human face when trying to expose a liar, we should be able to teach AI to look for the same clues. Whether it is ethically correct to allow a machine to perform lie detection is yet another moral dilemma, and my respondents mostly agree that an AI should not possess such authority.



Yet another example of face recognition I personally witnessed is the automatic face comparison at border control at Birmingham International Airport. Waiting queues have been greatly reduced thanks to automatic, unmanned gates, which scan your passport picture and compare it to your facial features. Additionally, the chip inside a passport stores fingerprint information that the gate requires to let you through, adding another layer of security when crossing the border. 70% of my respondents who have had a chance to cross a border in recent years admitted to using such gates. Once again, concerns are raised about the excessive gathering of personal data by developing surveillance technologies.
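The comparison step such a gate performs can be sketched in miniature: both the passport photo and the live camera image are reduced to numeric feature vectors (embeddings), and the gate opens only if the two vectors are similar enough. The embeddings, the similarity measure, and the threshold below are my own illustrative assumptions, not the actual algorithm any airport uses.

```python
import math

# Hypothetical sketch of an e-gate's matching step: compare a feature
# vector derived from the passport photo against one derived from the
# live camera image. Vectors and threshold are invented for this example.

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def gate_decision(passport_embedding, live_embedding, threshold=0.8):
    """Open the gate only if the two face embeddings are similar enough."""
    return cosine_similarity(passport_embedding, live_embedding) >= threshold

# Toy example: a near-identical pair passes, a dissimilar pair does not.
same_person = gate_decision([0.9, 0.1, 0.4], [0.88, 0.12, 0.41])
different_person = gate_decision([0.9, 0.1, 0.4], [0.1, 0.9, 0.2])
```

The threshold is the interesting design choice: set it too low and impostors pass, set it too high and legitimate travellers are rejected and sent to a manned desk, which is exactly the trade-off behind the queue reductions described above.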




In recent years, as more developed countries invest in their military budgets, autonomous weapons such as armed drones and sentry turrets already seem to be operating in the armies of some countries. As the graph above shows, over three quarters of the people I surveyed had not come across information about these technologies being used. From a certain perspective, the fact that most of my respondents did not know about such technologies might indicate that the majority of us live in relative peace, where worrying about weapons, their use and their types does not belong at the top of our priority list. The fact that these technologies have been used in military operations might indicate that in the near future AI will plant its roots as a staple technology in every sector, from commercial safety to national defence.


With the next question closely following the previous one, one of the concerns in discussions around the world related to LAWS (lethal autonomous weapon systems) is the authority of AI-powered systems to make decisions about taking human life. As the best outcome of a battle would be a victory without any casualties on either side, winning a battle without sacrificing the life of any soldier by using AI would be the second logical option. As the ethics of such solutions are questionable, 69% of surveyed people agree that AI should not be able to decide on taking a human life, and that this is a line AI should not cross.


If the worst-case scenario were to happen and an AI-powered machine committed crimes against protected groups, it is unclear, and actively discussed, on whom the responsibility would fall. From the answers I received on my form, most votes fell on the government officials who approved the use of such weapons, followed by army generals in second place. With the remaining opinions divided between the other options, this topic remains in the realm of possibilities, as a war crime of that nature has not yet officially taken place.

That concludes my primary research regarding AI in the commercial security and military sectors. With the answers provided, I was able to gain more insight into people's opinions, with some surprising results I did not expect to see. This research taught me how to improve a future survey: how to formulate questions better and plan them in advance. Thank you for reading my post, and I will see you in my secondary research post.

