
TDAI Speaker Series Spring 2021 -- Zhiqiang Lin, Raef Bassily

Collage of speakers in the Spring 21 seminar series
March 11, 2021
12:00 pm - 1:00 pm
Zoom

Register here: https://osu.zoom.us/webinar/register/WN_GjghUFbQQVGH9AXSebf8Ig


“Securing Data Analytics via Trusted Execution Environment”

Photo of Zhiqiang Lin

Zhiqiang Lin, TDAI Core Faculty, Associate Professor, Computer Science and Engineering, College of Engineering

In this talk, Dr. Lin will present a line of research on developing abstractions, tools, and SDKs to ease SGX programming and data analytics. In particular, he will talk about SGX-BigMatrix, which supports vectorized computations and optimal matrix-based operations over encrypted data using Intel SGX; SGX-Elide, which enables enclave code confidentiality via dynamic updating; and finally Rust-SGX, which allows programmers to develop memory-safe SGX applications atop the Rust programming language.

 

 

“Harnessing Public Data in Privacy-Preserving Machine Learning”

Photo of Raef Bassily

Raef Bassily, TDAI Core Faculty, Assistant Professor, Computer Science and Engineering, College of Engineering

One of the most salient features of our time is the dissemination of huge amounts of personal and sensitive data. Differential privacy has emerged as a sound theoretical approach to reason about privacy in a precise and quantifiable fashion, and has become the gold standard of privacy-preserving data analysis.

Despite its remarkable success, differential privacy is a stringent condition whose limitations can lead to unacceptable accuracy guarantees in many machine learning problems. In this talk, I will present a more relaxed model of learning under differential privacy, where the learning algorithm has access to a limited amount of public data in addition to its private input dataset. I will discuss algorithmic techniques we developed for this model and their formal accuracy guarantees. Our results show that, with a limited amount of public data, it is possible to attain the same level of accuracy attained by non-private algorithms, while providing strong privacy guarantees for the private dataset.
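To give a flavor of how public data can help a differentially private algorithm, here is a minimal, hypothetical sketch (not from the talk): a private mean estimate via the standard Laplace mechanism, where a small public sample is used to choose the clipping bound. Inspecting the public data costs no privacy budget; only the statistic computed on the private data is noised. All names and parameters here are illustrative assumptions.

```python
import math
import random

def private_mean(private_data, public_data, epsilon, rng):
    """Estimate the mean of private_data with epsilon-differential privacy.

    The clipping bound is chosen from public_data, which the algorithm is
    free to inspect directly -- no noise is added on its account.
    """
    # Public step: pick a clipping bound from the public sample (no privacy cost).
    clip = max(abs(x) for x in public_data)

    # Private step: clip, average, and add Laplace noise calibrated to the
    # sensitivity of the clipped mean, which is 2*clip / n.
    n = len(private_data)
    clipped = [max(-clip, min(clip, x)) for x in private_data]
    mean = sum(clipped) / n
    scale = 2 * clip / (n * epsilon)

    # Sample Laplace(0, scale) noise by inverse-CDF transform.
    u = rng.random() - 0.5
    noise = -scale * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))
    return mean + noise
```

With a generous privacy budget the noise is small, so the estimate tracks the true clipped mean closely; as epsilon shrinks, the noise scale grows proportionally, making the privacy/accuracy trade-off explicit.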

 
