| Time | Speaker | Affiliation | Title |
| --- | --- | --- | --- |
| 9:00–10:00 | Ting Yu (于挺) | Qatar Computing Research Institute / North Carolina State University | Synthetic Social Graph Generation with Local Differential Privacy |
| 10:00–11:00 | Kang Li (李康) | Professor, University of Georgia | Beyond Adversarial Learning — Data Scaling Attacks in Deep Learning Applications |
| 11:00–12:00 | Fengwei Zhang (张锋巍) | Assistant Professor, Wayne State University | Towards Transparent Malware Debugging on x86 and ARM |
Talk Title: Synthetic Social Graph Generation with Local Differential Privacy
Valuable information resides in decentralized social graphs, where no single entity has access to the complete graph structure. Instead, each user maintains only a limited view of the graph; for example, each user keeps a contact list locally on her phone. The contact lists of all users form an implicit social graph that could be very useful for studying interaction patterns among different population groups. However, due to privacy concerns, one cannot simply collect the unfettered local views from users and reconstruct the implicit social network.
In this talk, we present a technique to ensure local differential privacy for individuals while collecting structural information and generating representative synthetic social graphs. We show that existing local differential privacy and synthetic graph generation techniques are insufficient for preserving important graph properties, due to excessive noise injection, inability to retain important graph structure, or both. Motivated by these limitations, we propose LDPGen, a novel multi-phase technique that incrementally clusters users based on their connections to different partitions of the whole population. Every time a user reports information, LDPGen carefully injects noise to ensure local differential privacy. We derive optimal parameters for this process to cluster structurally similar users together. Once a good clustering of users is obtained, LDPGen adapts existing social graph generation models to construct a synthetic social graph. Our experiments show that the proposed technique produces high-quality synthetic graphs that represent the original decentralized social graphs well and significantly outperform those produced by baseline approaches.
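The per-report noise injection described above rests on the standard randomized-response primitive of local differential privacy. The sketch below, a simplification under stated assumptions, shows that primitive applied to a user's adjacency bit vector; it does not reproduce LDPGen's multi-phase clustering, optimal parameter derivation, or graph-model fitting, and the function names are hypothetical:

```python
import math
import random

def randomized_response(bit_vector, epsilon, rng):
    """Perturb each bit locally: keep it with probability
    e^eps / (1 + e^eps), flip it otherwise. Each user runs this on her
    own adjacency bits before reporting, so the collector never sees
    the true local view."""
    p_keep = math.exp(epsilon) / (1.0 + math.exp(epsilon))
    return [b if rng.random() < p_keep else 1 - b for b in bit_vector]

def estimate_ones(reports, epsilon):
    """Unbiased server-side estimate of the true number of 1-bits
    (e.g. total edge count) from the noisy reports.
    E[observed] = true*p + (n - true)*(1 - p), solved for `true`."""
    p = math.exp(epsilon) / (1.0 + math.exp(epsilon))
    n = sum(len(r) for r in reports)
    observed = sum(sum(r) for r in reports)
    return (observed - n * (1.0 - p)) / (2.0 * p - 1.0)
```

Despite every individual bit being deniable, aggregate statistics remain recoverable: with 200 users each holding 10 edges out of 100 possible, the estimator recovers roughly the true total of 2,000 edges at epsilon = 1.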
In this talk, I will also briefly describe other cyber security research projects and opportunities in Qatar Computing Research Institute (QCRI), especially in the field of security data analytics.
Ting Yu is the research director of the cyber security group at Qatar Computing Research Institute (QCRI), Hamad Bin Khalifa University. Before joining QCRI in 2013, he was an associate professor in the Department of Computer Science at North Carolina State University. He received his BS from Peking University in 1997, his MS from the University of Minnesota in 1998, and his PhD from the University of Illinois at Urbana-Champaign in 2003, all in computer science. He received the NSF CAREER Award in 2007. His research focuses on privacy-preserving data analysis, data anonymization, and security data analytics.
Talk Title: Beyond Adversarial Learning — Data Scaling Attacks in Deep Learning Applications
This talk presents a new type of security risk in common deep learning applications. Deep learning applications, such as image classification and voice recognition, make strong assumptions about the data formats used for training and classification. In this presentation, the speaker will demonstrate attacks that target the data scaling process in popular deep learning examples. By carefully crafting input data that mismatches the scales used by deep learning models, the speaker will show how an attacker can successfully evade image classification even when applications use well-trained deep learning models (including GoogLeNet with ImageNet data). At the end of the presentation, the speaker will also present a few potential defense strategies to detect or mitigate such data scaling attacks.
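The mismatch between input size and model scale can be illustrated with a toy example: because downscaling keeps only a small subset of the input pixels, an attacker can overwrite exactly those pixels so the model "sees" a different image than a human viewing the full-size input. This sketch uses nearest-neighbor sampling for clarity; attacks on real pipelines target the actual resizing routines of image libraries (bilinear, bicubic, etc.), and the function names here are hypothetical:

```python
import numpy as np

def nn_downscale(img, factor):
    """Nearest-neighbor downscaling: keep only every `factor`-th pixel
    in each dimension, discarding all others."""
    return img[::factor, ::factor]

def craft_scaling_attack(decoy, target, factor):
    """Embed `target` so that it appears only after downscaling by
    `factor`. Only 1/factor^2 of the decoy's pixels are modified, so
    the full-size image still looks like the decoy to a human viewer,
    while the model classifies the hidden target."""
    assert decoy.shape[0] == target.shape[0] * factor
    assert decoy.shape[1] == target.shape[1] * factor
    attack = decoy.copy()
    attack[::factor, ::factor] = target  # overwrite sampled pixels only
    return attack
```

With a 256x256 decoy and an 8x downscale, fewer than 2% of the pixels change, yet the scaled output is exactly the attacker's 32x32 target — the classifier and the human reviewer disagree about what the image is.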
Kang Li is a professor of computer science and the director of the Institute for Cybersecurity and Privacy at the University of Georgia. He received a B.S. in computer science from Tsinghua University, a master's degree in law from Yale, and a Ph.D. in computer science from the Oregon Graduate Institute. Dr. Li's research results have been published at academic venues such as IEEE S&P, ACM CCS, and NDSS, as well as at industry conferences such as Black Hat, SyScan, and ShmooCon. He is the founder and mentor of multiple CTF security teams, including SecDawg and Blue-Lotus. He was also a founder and player of Team Disekt, a finalist in the 2016 DARPA Cyber Grand Challenge.
Talk Title: Towards Transparent Malware Debugging on x86 and ARM
With the rapid proliferation of malware attacks on the Internet, understanding malicious behavior plays a critical role in crafting effective defenses. Existing malware analysis platforms leave detectable fingerprints, such as uncommon string properties in QEMU, signatures in Linux kernel profiles, and artifacts in basic instruction execution semantics. Because these fingerprints give malware a chance to split its behavior depending on whether an analysis system is present, existing analysis systems are insufficient for analyzing sophisticated malware. In this talk, I will present a framework for transparent malware analysis that leverages hardware features of existing PC and mobile devices to increase the transparency of malware analysis. In particular, I will introduce MalT on the x86 architecture and Ninja on the ARM architecture. MalT uses System Management Mode as the execution environment and the performance monitoring unit as a hardware assistant to facilitate analysis, whereas Ninja uses TrustZone technology and the Embedded Trace Macrocell to improve transparency. Moreover, both MalT and Ninja are OS-agnostic and require no modification to the operating system or the target application.
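The "split behavior" problem can be made concrete with one common software-level fingerprint check: on Linux, any process can read the `TracerPid` field of `/proc/self/status` to detect an attached debugger and switch to benign behavior. Hardware-assisted approaches such as MalT and Ninja aim to leave no such in-band artifact for malware to probe. This Linux-specific sketch is illustrative only and is not drawn from the talk itself:

```python
def being_traced():
    """Return True if a ptrace-based debugger or tracer is attached.

    Linux-specific: reads the TracerPid field of /proc/self/status,
    which is 0 when no tracer is attached. This is exactly the kind of
    OS-visible fingerprint that lets malware split its behavior, and
    that hardware-level analysis frameworks try to eliminate.
    """
    try:
        with open("/proc/self/status") as f:
            for line in f:
                if line.startswith("TracerPid:"):
                    return int(line.split()[1]) != 0
    except OSError:
        pass  # non-Linux or /proc unavailable: no evidence of tracing
    return False
```

A malware sample using such a check would run its payload only when `being_traced()` is False, which is why an analysis environment must be indistinguishable from a normal one at every observable layer.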
Dr. Fengwei Zhang is an assistant professor and the director of the COMputer And Systems Security (COMPASS) lab at Wayne State University. He received his Ph.D. in computer science from George Mason University in 2015. His research interests are in systems security, with a focus on trustworthy execution, transparent malware debugging, transportation security, and plausible-deniability encryption. His work has been published at top security venues including IEEE S&P, USENIX Security, NDSS, IEEE TIFS, and IEEE TDSC. He received the Distinguished Paper Award at ACSAC 2017.