Deloitte Graduate Student Research Program on Artificial Intelligence
In partnership with Deloitte’s Artificial Intelligence Institute, the Ted and Karyn Hume Center for National Security and Technology at Virginia Tech is seeking up to four graduate students pursuing degrees in artificial intelligence and related fields, to be hired as Graduate Research Assistants to conduct fundamental research for the 2021-22 academic year.
Selected graduate students will be advised by Hume Center and Commonwealth Cyber Initiative faculty who will assist the student in their research endeavors and coordinate research engagement with Deloitte.
June 30, 2021
Students must be enrolled and attending Virginia Tech during the 2021-22 academic year and in disciplines related to the following four research topic areas.
Research Topic Areas
AI Assurance and Cybersecurity
Project 1: Securing Artificial Intelligence Systems
Faculty Adviser: Feras Batarseh (Research Associate Professor, Commonwealth Cyber Initiative)
The adoption of Artificial Intelligence into systems and decision-making processes requires that the AI software perform as intended and be resilient to adversarial action. This research will consider the intersections of AI assurance and cybersecurity for enabling AI adoption. From a cybersecurity perspective, AI introduces new classes of attacks: poisoning, evasion, and model inversion are the most common. Simultaneously, AI assurance research is pushing new frontiers in ensuring systems are safe, secure, robust, explainable, and trustworthy. The expected outcomes of this research are new methods for assuring AI in the presence of an intelligent adversary.
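As a concrete illustration of one of the attack classes named above, the sketch below applies a fast-gradient-sign evasion perturbation to a toy logistic-regression classifier. The model, weights, and inputs are invented for illustration and are not part of the project itself.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, w, b, y, eps):
    """Fast Gradient Sign Method evasion attack on a logistic-regression
    model: nudge each input feature in the direction that increases the loss."""
    p = sigmoid(np.dot(w, x) + b)     # model's predicted probability of class 1
    grad_x = (p - y) * w              # gradient of the log-loss w.r.t. the input
    return x + eps * np.sign(grad_x)  # bounded per-feature perturbation

# Toy model and input (illustrative values only).
w = np.array([1.0, -2.0, 0.5])
b = 0.0
x = np.array([0.2, -0.1, 0.4])
y = 1  # true label

x_adv = fgsm_perturb(x, w, b, y, eps=0.3)
p_clean = sigmoid(np.dot(w, x) + b)
p_adv = sigmoid(np.dot(w, x_adv) + b)
print(p_clean, p_adv)  # confidence in the true class drops after the attack
```

Even this small perturbation flips the toy model's decision, which is the behavior that assurance methods for AI must detect or defend against.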
AI and 5G
Project 2: AI for 5G Performance and Security Enhancements
Faculty Adviser: Abdul Rahman (Artificial Intelligence Testbed Director, Commonwealth Cyber Initiative)
This research will build on current efforts funded by Deloitte to develop attack graphs and attack surfaces for 5G. That work, in combination with new techniques being developed by Deloitte and Virginia Tech in reinforcement learning for attack graph generation, will provide the baseline for this research effort. The research will characterize performance capabilities and attack surfaces for 5G technologies. This baseline characterization will support the development of automated defenses for 5G-enabled networks and suggest improvements in performance. Outcomes of this research include new AI methods for 5G security and new capabilities that leverage AI for improving 5G performance.
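To make the attack-graph idea concrete, the sketch below enumerates simple attack paths through a small hypothetical graph. Node and exploit names are illustrative stand-ins, not taken from the Deloitte-funded effort.

```python
# Hypothetical 5G attack graph: nodes are attacker footholds, edges are
# exploit steps. All names below are invented for illustration.
attack_graph = {
    "internet": ["gNB_exposed_api"],
    "gNB_exposed_api": ["core_amf"],
    "core_amf": ["subscriber_db", "network_slice"],
    "network_slice": ["subscriber_db"],
    "subscriber_db": [],
}

def attack_paths(graph, source, target):
    """Enumerate all simple attack paths from source to target via DFS."""
    paths, stack = [], [(source, [source])]
    while stack:
        node, path = stack.pop()
        if node == target:
            paths.append(path)
            continue
        for nxt in graph.get(node, []):
            if nxt not in path:  # keep paths simple (no revisiting nodes)
                stack.append((nxt, path + [nxt]))
    return paths

paths = attack_paths(attack_graph, "internet", "subscriber_db")
print(paths)
```

Enumerations like this give a defender the full set of exploit chains to an asset; a reinforcement-learning agent could then be trained to generate or prioritize such paths automatically.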
AI for Test and Evaluation
Project 3: Training, Test, and Evaluation Methodologies for AI Enabled Mobile Platforms
Faculty Adviser: Laura Freeman (Director, Intelligent Systems Lab, Hume Center)
The development of autonomous mobile platforms creates a need for guidelines and methodologies by which to conduct training, test, and evaluation, ensuring that acquired systems can complete their designed missions in a verifiable and robust manner. In particular, when mobile platforms employ Artificial Intelligence in their decision-making processes, it is essential to verify the performance, integrity, and limits of applicability of the offered systems. This research project will investigate fundamental questions regarding AI assurance and reproducibility that will ultimately inform the development of guidelines and standards by which test and evaluation can be performed on AI-enabled mobile platforms, such that they can be definitively evaluated on their ability to perform as designed and complete the mission.
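One minimal ingredient of a reproducible test-and-evaluation methodology is repeating an evaluation under independent random seeds and bounding the run-to-run variance. The sketch below shows the pattern with an invented stand-in for a real simulation; the policy and scenario are illustrative assumptions, not project artifacts.

```python
import random
import statistics

def evaluate(policy, seed, n_trials=200):
    """Hypothetical evaluation: success rate of a platform's policy over
    randomized scenarios (a stand-in for a real mission simulation)."""
    rng = random.Random(seed)  # explicit seeding makes each run reproducible
    return sum(policy(rng.random()) for _ in range(n_trials)) / n_trials

# Toy policy: succeeds whenever scenario difficulty is below 0.8 (illustrative).
policy = lambda difficulty: difficulty < 0.8

# Repeat under independent seeds to estimate run-to-run variability.
rates = [evaluate(policy, seed) for seed in range(10)]
mean, spread = statistics.mean(rates), statistics.pstdev(rates)
print(mean, spread)
```

A test standard might then require both that the mean success rate clear a mission threshold and that the spread across seeds stay below a tolerance.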
AI for Fraud Detection
Project 4: Financial Fraud Detection
Faculty Adviser: Peter Beling (Associate Director, Intelligent Systems Lab; Professor, Grado Department of Industrial and Systems Engineering)
Fraud detection in financial transactions often requires discerning small signals in vast amounts of data. This research will explore the application of generative adversarial networks to augment and adjust training data distributions for various types of financial transactions. The project will also map the landscape of fraudulent transactions, leveraging past work in variational autoencoders to understand whether unique features in latent space can provide sensitive indicators of fraud. Additionally, the project will explore explainable methods for transactions, using concepts of combinatorial coverage to determine how unique combinations of features in high-dimensional space may provide insight into the validity of a transaction.
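The combinatorial-coverage idea can be sketched simply: check which 2-way feature-value combinations in a new transaction were never observed in the training data. Unseen combinations mark regions of feature space the model was never trained on, a possible anomaly indicator. The feature names and records below are invented for illustration.

```python
from itertools import combinations

def pairwise_combos(record):
    """All 2-way (feature-index, value) combinations present in a record."""
    indexed = list(enumerate(record))
    return {frozenset(pair) for pair in combinations(indexed, 2)}

def uncovered_pairs(train, tx):
    """2-way combinations in tx that never appear anywhere in the training
    data; these flag untested regions of the feature space."""
    seen = set()
    for rec in train:
        seen |= pairwise_combos(rec)
    return pairwise_combos(tx) - seen

# Toy categorical transactions: (country, channel, amount_band), illustrative.
train = [("US", "web", "low"), ("US", "pos", "low"), ("DE", "web", "high")]
tx = ("DE", "pos", "high")  # each value appears alone, but some pairs are new

novel = uncovered_pairs(train, tx)
print(len(novel))
```

Here every individual value of `tx` occurs in the training data, yet two of its pairwise combinations do not, exactly the kind of higher-order novelty that single-feature checks miss.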
About the Partnership
Virginia Tech has partnered with Deloitte to foster experiential opportunities for students, enhancing research, communication, and professional skills that better prepare them to enter the workforce. More specifically, this collaboration enables graduate student-based research in the field of Artificial Intelligence, in order to simultaneously prepare graduate students for careers at institutions such as Deloitte and to advance the immediate research and development goals of Deloitte. Graduate student education through participation in emergent research programs is a core mission of the Ted and Karyn Hume Center for National Security and Technology at Virginia Tech. This program is the next step in a continuing research collaboration between Deloitte and Virginia Tech. It builds on and extends ongoing 5G security programs that include researchers from the Commonwealth Cyber Initiative, which includes Old Dominion University and Virginia Tech.