2016

10 Steps to a Successful Career

November 9

Dr. Elizabeth Xu

Biography

Dr. Elizabeth Xu is SVP and Chief Architect at BMC, the 9th largest independent software company globally, driving technology directions for 2000+ engineers. She has led global R&D teams as Group VP at Acxiom, as SVP at Deem.com, RMS, and Vitria, and in a management position at IBM. She has served on boards and as a public company corporate officer. She won seven Stevie Awards across 2013 and 2015, as well as the Women of Influence award in 2015.

Dr. Xu has spoken at many conferences and companies, including Google, Apple, and Alibaba. Her book "Myths of the Promotion," highly praised on Amazon, is used as a textbook at Stanford, where she has been teaching since 2010.

Dr. Xu received a Ph.D. and M.S. from the University of Nevada, Reno; a B.S. and M.S. from Peking University; and an SEP from Stanford.

Graph Algorithms on the Cray XMT-2

October 21

Dr. Shahid Bokhari, Independent Researcher

Abstract

The Cray XMT-2 (Extreme Multithreading) supercomputer is the latest incarnation of the Tera architecture (1998) and traces its lineage back to the HEP (1982). The machine has hardware support for 128 threads per processor, a flat shared memory without locality, and individually lockable 64-bit words. Currently, the largest machine available has 128 processors and 4 terabytes of shared memory. It is thus very well suited to implementing graph algorithms that require "unstructured" access to a large memory space.

I will describe my experiences with implementing algorithms on this architecture. Examples include DNA sequencing (using de Bruijn graphs), influenza virus evolution (shortest trees) and image segmentation (maxflow-mincut). In each case I will show that there are no issues of problem partitioning or load balancing and that good performance can be obtained using ordinary C code with the addition of a few pragmas and machine intrinsics. The end result is that the user sees a familiar C/C++ programming environment into which the implementation details of parallelism intrude very occasionally, if at all.
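For flavor, here is a minimal sketch of what such code can look like (a hypothetical fragment, not taken from the talk): an edge-parallel loop in plain C, annotated with a Cray MTA-style pragma and using the machine's atomic fetch-and-add intrinsic.

    /* Sketch of XMT-style C (assumes the Cray programming environment):
     * count out-degrees of a graph stored as an edge list. The pragma
     * asserts the loop is safe to spread across hardware threads, and
     * int_fetch_add is the machine's atomic fetch-and-add intrinsic,
     * so no partitioning or explicit locking is needed. */
    void count_degrees(int nedges, const int *src, int *degree)
    {
    #pragma mta assert parallel
        for (int e = 0; e < nedges; e++)
            int_fetch_add(&degree[src[e]], 1);
    }

On a commodity multicore, the same loop would need an OpenMP pragma and a compiler atomic; the point of the XMT is that this is essentially all the parallelism the programmer ever sees.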

The ease of programming of the XMT has lessons for the current crop of commodity multicore/multiGPU systems.

Biography

Shahid Bokhari has been working in the areas of parallel and distributed computing since 1975. He has held positions at UET Lahore, NASA's Institute for Computer Applications, University of Colorado and Ohio State University. He is a Fellow of the IEEE and of the ACM.


The DARPA Robotics Challenge - A "Cinderella" Story

Friday, April 22

Dr. Paul Oh, Lincy Professor of Unmanned Aerial Systems at the University of Nevada, Las Vegas (UNLV)

Abstract

The DARPA Robotics Challenge (DRC) kicked off in fall 2012. Driven by lessons learned in Fukushima, the DRC served to significantly advance the state-of-the-art in disaster-response robotics. The challenge events were unprecedented: a single robot was required to drive a vehicle, climb a ladder, and drill through walls. Furthermore, robots were to operate untethered and in degraded communication settings. This talk presents a behind-the-scenes "Cinderella" story of the humanoid DRC-Hubo. The story underscores the value of international collaboration, open-source architecture, and a crowdsourced approach - all of which were critical differentiators for the team and led to victory.

Biography

Prior to joining UNLV, Dr. Oh was with Drexel University's Mechanical Engineering Department from 2000-2014 where he founded and directed the Drexel Autonomous Systems Lab. He received mechanical engineering degrees from McGill (B.Eng 1989), Seoul National (M.Sc 1992), and Columbia (PhD 1999) universities. Honors include faculty fellowships at NASA Jet Propulsion Lab (2002), Naval Research Lab (2003), the NSF CAREER award (2004), the SAE Ralph Teetor Award for Engineering Education Excellence (2005) and being named a Boeing Welliver Fellow (2006). He is also the Founding Chair of the IEEE Technical Committee on Aerial Robotics and UAVs. From 2008-2010, he served at the National Science Foundation (NSF) as the Program Director managing the robotics research portfolio.

He has authored over 100 refereed archival papers and edited 3 books in the areas of robotics and unmanned systems. He serves as Editor for several leading robotics publications, including Springer-Verlag's Journal of Intelligent and Robotic Systems and the Journal of Intelligent Service Robotics. He also served as Director for the NATO Advanced Studies Institute (ASI) on Unmanned Systems in 2010, which gathered researchers from over 20 countries to capture the state of the art and formulate research roadmaps. In 2012, he served as Program Chair for the flagship conference of the academic robotics community, the IEEE International Conference on Robotics and Automation (ICRA), held in St. Paul, MN, USA. For the DARPA Robotics Challenge, he served as lead for Team DRC-Hubo (2012-2013) and Team DRC-Hubo@UNLV (2014-2015).

In recognition of his international partnerships and impact on US research and education, he was one of three Distinguished Lecturers invited by the National Science Board to speak at their 60th Anniversary in 2010.


Fast & Furious: Accelerating Parallel and Distributed Computing for Big Data

Thursday, March 17

Dr. Feng Yan

Abstract

Big data has changed the way we utilize computing resources. Parallel and distributed computing has become the most promising direction for big data processing, as single-core and single-machine approaches can no longer meet the ever-growing computing requirements. With the prominence of big data, workloads, resources, and computing objectives are undergoing significant changes. These changes bring new challenges for designing, implementing, and optimizing parallel and distributed computing frameworks and systems. Developing effective and adaptive methodologies and tools to tackle these challenges becomes vitally important for efficient big data processing. In this talk, I will focus on deep learning, a big data application that has attracted great attention in both academia and industry due to its great potential in many domains, such as image, speech, vision, and language understanding. I will demonstrate why parallel and distributed computing plays a crucial role and how modeling and system techniques can help improve performance and efficiency. I will illustrate this premise through examples including:

  • Accelerating distributed deep learning through performance modeling and scalability optimization (see the sketch after this list).
  • Accelerating deep neural network serving through judicious parallel configuration choices and efficient scheduling.
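
As a point of reference for the first bullet, here is an illustrative sketch (generic, not the speaker's system) of the communication step at the heart of synchronous data-parallel training, whose cost is exactly what performance models of distributed deep learning must capture:

    /* Sketch of one synchronous data-parallel SGD step using MPI. Each
     * worker has computed a local gradient over its shard of the batch;
     * MPI_Allreduce sums the gradients across workers, and every worker
     * then applies the averaged update to its replica of the weights. */
    #include <mpi.h>

    void sgd_step(float *grad_local, float *grad_sum, int n,
                  float *weights, float lr, MPI_Comm comm)
    {
        int nworkers;
        MPI_Comm_size(comm, &nworkers);

        MPI_Allreduce(grad_local, grad_sum, n, MPI_FLOAT, MPI_SUM, comm);

        for (int i = 0; i < n; i++)
            weights[i] -= lr * (grad_sum[i] / nworkers);
    }

As the worker count grows, the allreduce increasingly dominates step time, which is why judicious parallel configuration matters for both training and serving.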

Biography

Feng Yan (http://www.cs.wm.edu/~fyan/) is a Ph.D. candidate in the Computer Science Department at the College of William and Mary. He completed internships at Microsoft Research in 2014 and at HP Labs in 2013. Feng Yan’s main research projects have focused on improving the performance and efficiency of parallel and distributed computing frameworks and systems for big data processing using various modeling and system techniques. He closely collaborates with industry partners (e.g., Microsoft Research, HP Labs, IBM Research, EMC, NetApp) to solve big and important problems, ranging from data center infrastructures to cluster computing frameworks (e.g., Hadoop, Spark) to large-scale data-intensive computing systems (e.g., distributed deep learning systems). His research outcomes have been published in premier venues (more than 20 publications in 5 years) and turned into patents and software prototypes. Feng Yan also actively serves the research community. He has served as a TPC member for ESEC/FSE (artifact evaluation track), IEEE BigData, ALLDATA, and DATA ANALYTICS, and as a reviewer for about 20 different journals and conferences, including IEEE TCC, ACM TOS, IEEE TII, ACM SIGMETRICS, IFIP Performance, IEEE ICDCS, IEEE/IFIP DSN, USENIX ICAC, ACM/SPEC ICPE, IEEE/ACM CCGrid, and IEEE IC2E.


Privacy Preservation in Smart Grid: Cases of Vehicle-to-Grid and Smart Meter Communications

Tuesday, March 15

Dr. Kemal Akkaya

Abstract

The Power Grid is undergoing a massive transformation to enable various smarter applications that will not only increase its resilience but also reduce overall costs. This transformation rests on a modern underlying communication infrastructure that will interconnect every component, including electric meters and cars. In this way, meters will provide fine-grained data about power usage in particular neighborhoods, enabling efficient management of power flows. Plug-in electric vehicles (PEVs) will promote the adoption of intermittent renewable energy sources by acting as energy storage systems: PEVs can inject power into the Smart Grid during periods of reduced production to balance demand. For both smart meters and PEVs, the data will be collected over wireless links (e.g., WiFi or LTE) and will be available for analysis by grid operators. The collection and storage of such data raise privacy issues that might expose consumers' living and driving habits. This talk will focus on the privacy aspects of vehicle-to-grid (V2G) and smart meter communications. We will then present approaches to preserve privacy in both settings by focusing on wireless protocol design. Specifically, we present a data obfuscation method for smart meters and a privacy-preserving framework for power injection from PEVs to the grid. The talk will conclude with other ongoing security-related projects in the ADWISE Lab at FIU.
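
To give a flavor of what data obfuscation can mean here, the following is a minimal sketch of one common idea from the literature (not necessarily the scheme presented in the talk): each meter perturbs its fine-grained readings with zero-mean noise, masking individual usage patterns while keeping neighborhood-level aggregates accurate in expectation.

    /* Sketch of additive-noise obfuscation for smart meter readings.
     * The noise is zero-mean, so summing many obfuscated readings over
     * a neighborhood still estimates the true total demand, while any
     * single household's fine-grained profile is masked. */
    #include <stdlib.h>

    static double zero_mean_noise(double amp)   /* uniform in [-amp, amp] */
    {
        return amp * (2.0 * rand() / (double)RAND_MAX - 1.0);
    }

    double obfuscated_reading(double true_kwh, double amp)
    {
        return true_kwh + zero_mean_noise(amp);
    }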

Biography

Dr. Kemal Akkaya is an associate professor in the Department of Electrical and Computer Engineering at Florida International University. He received his PhD in Computer Science from the University of Maryland Baltimore County in 2005 and joined the Department of Computer Science at Southern Illinois University (SIU) as an assistant professor. Dr. Akkaya was an associate professor at SIU from 2011 to 2014. He was also a visiting professor at The George Washington University in Fall 2013. Dr. Akkaya leads the Advanced Wireless and Security Lab (ADWISE) in the ECE Dept. His current research interests include security and privacy in the Internet of Things and cyber-physical systems, software-defined networking, and topology control in sensor networks. Dr. Akkaya is a senior member of the IEEE. He is an area editor of the Elsevier Ad Hoc Networks Journal and serves on the editorial boards of IEEE Communications Surveys and Tutorials and the Sensors Journal. He has served as guest editor for the Journal of High Speed Networks, Computer Communications Journal, and Elsevier Ad Hoc Networks Journal, and on the TPCs of many leading wireless networking conferences, including IEEE ICC, Globecom, LCN, and WCNC. He has published over 120 papers in peer-reviewed journals and conferences. He received the "Top Cited" article award from Elsevier in 2010.


Discovery to Innovation

Friday, March 11

Dr. Babu DasGupta

Abstract

Following a brief summary of success stories resulting from fundamental research, the rest of my talk will cover an overview of the various innovation programs at NSF. The specific topics include the Industry/University Cooperative Research Center (I/UCRC), Grant Opportunity for Academic Liaison with Industry (GOALI), Accelerating Innovation Research (AIR), Innovation Corps (I-Corps) programs, and the Small Business Innovation Research (SBIR) program.

Biography

Dr. Rathindra (Babu) DasGupta joined the National Science Foundation (NSF) in June 2006 as a program director in the Division of Industrial Innovation and Partnerships for the Small Business Innovation Research program. DasGupta is currently the lead program director for the Industrial Innovation and Partnerships (IIP) academic partnerships cluster. Before joining NSF, DasGupta was the chief scientist for the CONTECH division of the SPX Corporation. He was also the technical director at the Meta-Mold division of the Amcast Industrial Corporation. Before joining industry, DasGupta held various professorships at the Milwaukee School of Engineering, UW-Madison, UW-Milwaukee, and Western Michigan University. DasGupta has received multiple awards and honors, including the Raymond D. Peters Endowed Professorship in Materials Science at the Milwaukee School of Engineering (1987-1990), the Inland Steel-Ryerson outstanding undergraduate teacher award at the Milwaukee School of Engineering (1985), the Herman H. Doehler Award from the North American Die Casting Association (2000), and the Innovation Award at CONTECH (1997). He had the honor of being the ASM-IIM visiting lecturer to India in 2000 and was named an NAI Fellow (2013). In the summer of 1985, DasGupta was also invited as a visiting scientist to China Steel Corporation in Kaohsiung, Taiwan. DasGupta has published numerous papers and presented at various international and domestic conferences, and he has five patents to his credit.


Supporting big data applications on high-performance computing systems at extreme scales

Thursday, March 10

Dr. Dongfang Zhao

Abstract

Many big data analytics tools have emerged to meet the increasing needs of data-intensive applications in data centers. Yet most of these tools are optimized for commodity hardware and have not been ported to, or optimized for, the hardware features found in high-performance computing (HPC) platforms, such as fat nodes with large amounts of memory, RISC processors, and RDMA networks. In this talk I will present recent efforts, made in joint projects between universities and national labs, to support data-intensive applications on extreme-scale HPC systems. I will start by introducing FusionFS, a distributed file system designed for data-intensive scientific applications on supercomputers. FusionFS employs many features not commonly seen in conventional systems: distributed metadata management, optimized I/O throughput for checkpointing, dynamic file chunking, cooperative caching, lightweight provenance, and GPU-accelerated data redundancy. I will also discuss several future directions of my research, such as customizing Spark for HPC machines (sponsored by the U.S. Department of Defense) and designing a distributed database for scientific applications on HPC systems (a collaboration between the Pacific Northwest National Laboratory and the University of Washington).
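
To make "distributed metadata management" concrete, here is an illustrative sketch of the general idea (FusionFS's actual design differs in its details): rather than funneling every metadata operation through a central server, each file's metadata is owned by the node selected by hashing its path, so metadata load spreads across the whole machine.

    /* Sketch: route a file's metadata to one of nservers nodes by
     * hashing its path (djb2 string hash). Lookups and updates for
     * different files then hit different nodes, avoiding the central
     * metadata-server bottleneck of conventional parallel file systems. */
    unsigned long hash_path(const char *path)
    {
        unsigned long h = 5381;
        while (*path)
            h = h * 33 + (unsigned char)*path++;
        return h;
    }

    int metadata_server_for(const char *path, int nservers)
    {
        return (int)(hash_path(path) % (unsigned long)nservers);
    }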

Biography

Dr. Dongfang Zhao is a postdoctoral researcher at the Pacific Northwest National Laboratory, Richland, Washington. His research interests span big data systems, machine learning, cyber security, high-performance computing, and cloud computing. Dr. Zhao currently serves on the Editorial Board of the Journal of Big Data (Springer) and as Co-Chair of several international conferences, including IEEE NAS'16, ACM ScienceCloud'16, and IEEE/ACM BDC'15. He has authored and co-authored over 30 peer-reviewed publications, in prestigious journals such as IEEE TPDS and IEEE TPAMI as well as leading conferences such as IEEE IPDPS and IEEE/ACM CCGrid. He received his Ph.D. in computer science from the Illinois Institute of Technology, Chicago, Illinois.


Application of Dynamic Signal Processing and Pattern Classification Techniques to Brain Imaging Data

Friday, March 4

Dr. Unal Sakoglu

Abstract

In this talk, research results from various neuroimaging modalities, such as functional magnetic resonance imaging (fMRI) and electroencephalography (EEG), will be presented. The focus will be on dynamic functional connectivity (DFC), a sliding-window-based method developed recently by our group to assess the temporal dynamics of functional connectivity among different brain networks. DFC provides more information than static FC, and DFC-based features can lead to better classification of brain diseases and conditions than static FC-based features. Analysis and classification results from neuroimaging data will be presented.
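
For readers unfamiliar with sliding-window connectivity, here is a minimal sketch of the general idea (not the group's published pipeline): correlate two brain-network time series inside a window of length w, then slide the window along the scan, turning connectivity into a time course rather than a single static value.

    /* Sketch of dynamic functional connectivity between two time
     * series: Pearson correlation within each window position. */
    #include <math.h>

    static double window_corr(const double *x, const double *y,
                              int start, int w)
    {
        double mx = 0, my = 0, sxy = 0, sxx = 0, syy = 0;
        for (int t = start; t < start + w; t++) { mx += x[t]; my += y[t]; }
        mx /= w;  my /= w;
        for (int t = start; t < start + w; t++) {
            sxy += (x[t] - mx) * (y[t] - my);
            sxx += (x[t] - mx) * (x[t] - mx);
            syy += (y[t] - my) * (y[t] - my);
        }
        return sxy / sqrt(sxx * syy);
    }

    /* One correlation value per window position: the DFC time course. */
    void dfc_trace(const double *x, const double *y, int n, int w,
                   double *out)
    {
        for (int s = 0; s + w <= n; s++)
            out[s] = window_corr(x, y, s, w);
    }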

Biography

Dr. Ünal "Zak" Sakoglu is currently an Assistant Professor in the Computer Science Department at Texas A&M University-Commerce. He received his BS in Electrical-Electronics Engineering from Bilkent University, Ankara, Turkey, and his MS and PhD degrees in Electrical and Computer Engineering from the University of New Mexico in Albuquerque, NM. His graduate research involved developing signal/image processing and nonuniformity-correction algorithms for better multispectral classification with infrared array sensors developed at the UNM Center for High Technology Materials. He did his postdoctoral training at the UNM Neurology Department BRAIN Imaging Center and at the Mind Research Network in Albuquerque, where he developed and applied data analysis and classification techniques to functional magnetic resonance imaging data. He then worked as a Research Scientist at the UT Southwestern Medical Center Neuroradiology Department and the UT Dallas Center for Vital Longevity, where he analyzed medical imaging data from modalities such as EEG, PET/CT, SPECT/CT, MRI, and fMRI. He is currently working on the development and application of dynamic multivariate pattern classification, data-mining, and machine-learning methods to functional neuroimaging data in order to advance the understanding of how the human brain functions and how it is affected by different brain conditions (different stimuli, disease, etc.). He is also working on developing brain mapping, fMRI signal simulation, and visualization techniques for improved dynamic analysis and classification of multidimensional neuroimaging data.


Scalable and Efficient Data Management and Analysis for Exascale Computing

Friday, March 4

Dr. Qing Gary Liu

Abstract

With increasing fidelity and resolution, large-scale scientific applications at Exascale will generate large volumes of data. These data need to be stored, pre-processed, analyzed, and visualized very efficiently so that the time to gain insights from them can be minimized. Conventional data management strategies are simplistic and can result in huge performance bottlenecks at Exascale for both data storage and analysis. In this talk, I will discuss scalable and efficient data management strategies that reduce these bottlenecks for applications running at scale (e.g., on 100,000 cores). I will present new techniques that reduce I/O interference in a massively parallel, multi-user environment without forcing operating-system or application-level changes. I will then present PreData, a new paradigm that couples simulations and analytics more efficiently by processing data in memory and in a streaming fashion. Finally, I will briefly introduce my work that combines phase identification and statistical modeling to generate compact, high-fidelity benchmarks for performance evaluation on new HPC systems.

Biography

Dr. Gary Liu is a Staff Scientist in the Computer Science and Mathematics Division at Oak Ridge National Laboratory. His research interests include Big Data in data-intensive science, high-performance computing, and high-speed networking. In particular, he has done extensive research on scalable data storage and analysis solutions on emerging architectures for HPC applications. Research products he solely developed have been adopted for production use by more than twenty HPC applications in fusion energy, high-energy physics, cancer research, quantum physics, materials science, turbine engine design, weather modeling, and other fields. He is currently leading and co-leading several research projects funded by the Department of Energy. Dr. Liu received his Ph.D. in Computer Engineering from the University of New Mexico, Albuquerque, NM in 2008, and was named an outstanding graduate of the ECE Department at UNM. He was a distinguished employee of the Computing and Computational Sciences Directorate at ORNL in 2012. He received an R&D 100 award as a principal investigator in 2013 for his contributions to adaptable I/O systems for Big Data applications. Dr. Liu has authored and co-authored technical articles at premier conferences such as ACM SIGMETRICS, HPDC, and SC, and his paper was a Best Paper finalist at ICCCN'08.


Geometric algorithms for Proximity and Uncertainty

Tuesday, March 1

Dr. Nirman Kumar

Abstract

The modern age is undoubtedly the age of data. Technological inventions have made sensing, acquiring, and storing data so easy that we have "too much data". Unfortunately, not all of this data is precise -- imprecision and uncertainty are prevalent in large datasets, geometric or otherwise. In this talk I will focus on two main problems in geometric computing over such big data in low dimensions: (i) a fast data structure for computing an approximate k-th nearest neighbor, in the form of an Approximate Voronoi Diagram, whose complexity improves as k increases; and (ii) for uncertainty in the existential model -- where each point p_i is only known to be active (i.e., to exist) with a certain probability \alpha_i and has an associated non-negative value v_i -- how to compute the expected range-max for rectangular query ranges in sublinear time with sub-quadratic storage for any constant dimension (the identity below shows the quantity being computed). I will also discuss directions for future research.
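
For intuition, the expected range-max in (ii) can be written down directly; the identity below is standard (the contribution lies in evaluating it in sublinear time). If the points falling inside the query range have values v_1 >= v_2 >= ... >= v_m with existence probabilities \alpha_1, ..., \alpha_m, then

    \mathbb{E}[\text{range-max}] = \sum_{i=1}^{m} v_i \, \alpha_i \prod_{j=1}^{i-1} (1 - \alpha_j),

since point i attains the maximum exactly when it is active and every higher-valued point in the range is inactive (the all-inactive case contributes 0, as the values are non-negative).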

Biography

Nirman Kumar is a postdoctoral researcher at the University of California, Santa Barbara. His interests are broadly in theoretical computer science, and more specifically in approximation algorithms and computational geometry. Nirman completed his Ph.D. in Computer Science in 2014 at the University of Illinois Urbana-Champaign, advised by Sariel Har-Peled. Before UIUC, he worked for companies including Chailabs (acquired by Facebook), Yahoo, and Oracle. Earlier, he received his M.S. degree in Computer Science from UIUC and his undergraduate (B.Tech) degree in Computer Science and Engineering from the Indian Institute of Technology Kanpur, India.


GLADE: A Scalable Big Data Analytics System

Thursday, February 25

Dr. Florin Rusu

Abstract

In this talk, we present GLADE, a scalable and efficient Big Data analytics system. GLADE is a multi-node, multi-thread parallel system built around the user-defined aggregate (UDA) abstraction. It has a push-based relational columnar storage engine and provides runtime code generation for efficient query processing. Although necessary, these features alone are not sufficient for scalable Big Data analytics, which requires more advanced methods such as approximation and multi-query processing. Parallel online aggregation implemented in GLADE allows approximate results with confidence bounds to be generated throughout query processing without increasing the overall execution time. Multi-query processing in GLADE goes beyond standard shared scans and intermediate-result caching: data access is shared throughout the entire memory hierarchy, from disk to CPU registers, across multiple queries. We provide a concrete example showing how these techniques are integrated into gradient descent optimization, the most popular method for training generalized linear models at terascale.
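
To make the UDA abstraction concrete, here is a minimal sketch in C (the names are illustrative, not GLADE's actual API): an aggregate supplies initialize, accumulate, merge, and terminate routines, and because merge is associative, the engine is free to build partial states in parallel across threads and nodes and combine them in any order -- the same property that parallel online aggregation exploits.

    /* Sketch of a user-defined aggregate (UDA) for AVERAGE. The engine,
     * not the user, decides how to partition tuples, run accumulate in
     * parallel, and merge the resulting partial states. */
    typedef struct { double sum; long count; } AvgState;

    void avg_init(AvgState *s)                  { s->sum = 0; s->count = 0; }
    void avg_accumulate(AvgState *s, double v)  { s->sum += v; s->count++; }
    void avg_merge(AvgState *s, const AvgState *other)
    {
        s->sum   += other->sum;
        s->count += other->count;
    }
    double avg_terminate(const AvgState *s)     { return s->sum / s->count; }

A gradient descent step fits the same mold: accumulate sums per-tuple gradient contributions and merge adds partial gradients, which is how training generalized linear models maps onto the abstraction.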

Biography

Florin Rusu is an assistant professor in the School of Engineering at the University of California, Merced, and a faculty scientist in the Scientific Data Management Group at Lawrence Berkeley National Lab. Florin's research interests lie in the area of databases and large-scale data management in general, with a particular focus on designing and building infrastructure for Big Data analytics. Specific topics include approximate query processing, scientific data processing, and scalable analytics. Florin has designed and implemented several data processing systems over the past ten years, including DBO/TurboDBO, DataPath, GLADE, and EXTASCID. He has been an active member of the database research community, publishing articles and serving on the program committees of prestigious conferences and journals such as SIGMOD, VLDB, TODS, and TKDE. Florin received a Hellman Faculty Fellowship in 2013 and a DOE Early Career Award in 2014.


Noise enhanced error correction: fighting noise with noise

Wednesday, February 24

Dr. Chris Winstead

Abstract

Error correction decoders are a crucial component of high-performance data retrieval and storage systems. One class of codes, known as Low Density Parity Check (LDPC) codes, has been a subject of research for over fifty years. During the last fifteen years, interest in LDPC codes and decoding algorithms has accelerated since they were shown to approach the theoretical Shannon capacity limit while requiring only linear computational complexity. As a result, LDPC codes are now integral to numerous standards in networking, digital telephone services, satellite communications, disk drives, and solid-state memories. LDPC decoders are highly specialized digital architectures that must deliver an extremely high rate of calculations in order to meet the throughput requirements of modern data systems. In spite of their linear computational complexity, LDPC decoders represent an efficiency bottleneck for many systems due to their costly arithmetic and high degree of parallelism.

This presentation describes recent advances on highly efficient "bit flipping" algorithms that are enhanced by introducing random noise into arithmetic calculations. One algorithm of particular interest is Gradient Descent Bit Flipping (GDBF). This algorithm is one of the simplest known methods for decoding LDPC codes, requiring a relatively small number of integer additions and comparisons. Due to its simplicity, GDBF was studied as a "toy", and was thought useful only for applications with severely constrained power budgets. By adding noise, however, the algorithm becomes quite powerful and is able to achieve the requirements of major industrial standards. To demonstrate the efficacy of noise-enhanced decoding, an ASIC implementation is presented for a Noisy Gradient Descent Bit-Flipping (NGDBF) decoder applied to the IEEE 802.3an 10GBase-T ethernet standard. The NGDBF decoder is shown to have superior energy efficiency and gate area, while incurring no significant loss in performance, compared to all previously reported 10GBase-T ASICs.
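
The following is a hedged sketch of a single NGDBF iteration, simplified from the literature (the ASIC implementation adds fixed-point arithmetic and other refinements omitted here): each bit forms an inversion metric from its channel sample, the bipolar syndromes of its parity checks, and an added Gaussian perturbation, and every bit whose metric falls below a threshold is flipped.

    /* Sketch of one noisy gradient descent bit-flipping iteration.
     * Bits are bipolar (+1/-1); H is an (nchecks x nbits) 0/1
     * parity-check matrix, y holds the channel samples, w weights the
     * check contributions, theta is the (negative) flip threshold,
     * sigma scales the injected noise, and s is a scratch buffer. */
    #include <math.h>
    #include <stdlib.h>

    static double gauss(double sigma)            /* Box-Muller sample */
    {
        double u1 = (rand() + 1.0) / ((double)RAND_MAX + 2.0);
        double u2 = (rand() + 1.0) / ((double)RAND_MAX + 2.0);
        return sigma * sqrt(-2.0 * log(u1))
                     * cos(2.0 * 3.14159265358979 * u2);
    }

    void ngdbf_iteration(int nchecks, int nbits, const int *H,
                         int *x, const double *y,
                         double w, double theta, double sigma, int *s)
    {
        /* 1. bipolar syndrome of each check: +1 satisfied, -1 not */
        for (int j = 0; j < nchecks; j++) {
            s[j] = 1;
            for (int k = 0; k < nbits; k++)
                if (H[j * nbits + k]) s[j] *= x[k];
        }
        /* 2. noisy inversion metric per bit; flip all below theta */
        for (int i = 0; i < nbits; i++) {
            double metric = x[i] * y[i] + gauss(sigma);
            for (int j = 0; j < nchecks; j++)
                if (H[j * nbits + i]) metric += w * s[j];
            if (metric < theta)
                x[i] = -x[i];
        }
    }

Decoding repeats this iteration until all checks are satisfied or an iteration limit is reached; the injected noise is what lets the algorithm escape the local minima that stall plain GDBF.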

The presentation concludes with a brief discussion of theoretical topics. We discuss the challenge of establishing a theoretical foundation for noise-enhanced decoding heuristics, their incomplete relationship to generalized stochastic optimization methods, and open problems related to the asymptotic performance limits of bit-flipping algorithms.

Biography

Chris Winstead received the B.S. degree in Electrical and Computer Engineering from the University of Utah in 2000, and the Ph.D. degree from the University of Alberta in 2005. He is currently with the ECE Department at Utah State University, where he holds the rank of Associate Professor. Dr. Winstead's research interests include information theory and coding, implementation of error-correction algorithms, low-power electronics, and fault-tolerant VLSI circuits. In 2010, Dr. Winstead received the NSF CAREER award for research in low-energy wireless communication circuits. During 2013-2014, he was a Fulbright Visiting Professor at the Université de Bretagne Sud (UBS) in Lorient, France. He is also a Senior Member of the IEEE and a member of the Tau Beta Pi engineering honor society.


Tangible Visualization and ICy STEAM: interactive tools for engaging complex computational systems

Friday, February 19

Dr. Brygg Ullmer

Abstract

Scientific and information visualization have long served as powerful vehicles for helping people graphically represent, explore, and understand our universe. Our group investigates and applies tangible visualization: tangible interfaces, which facilitate interaction with systems of computationally mediated physical artifacts, are used to represent and engage complex systems. Our interfaces range from handheld to architectural physical scales and come in different editions that span and interweave several interaction paradigms. Our research both develops enabling tools and architectures and studies and engages specific application domains. In this talk, we will consider several applications engaging ICy STEAM (interactive computational science, technology, engineering, arts, and mathematics), with emphasis on supporting computational comparative genomics and communicating scientific content to diverse audiences.

Biography

Brygg Ullmer is the Effie C. and Donald M. Hardy associate professor at LSU, jointly in the School of Electrical Engineering and Computer Science (EECS) and the Center for Computation and Technology (CCT). He leads CCT's Cultural Computing focus area (research division), with 15 faculty spanning six departments, and co-leads the Tangible Visualization group. He serves as director for the NIH-supported Louisiana Biomedical Research Network (LBRN) Bioinformatics, Biostatistics, and Computational Biology (BBC) Core, in support of 13 statewide campuses. Ullmer completed his Ph.D. at the MIT Media Laboratory (Tangible Media group) in 2002, where his research focused on tangible user interfaces. He has held internships at the Industrial Mathematics Initiative (U. South Carolina), Interval Research (Palo Alto), and Sony CSL (Tokyo); a postdoctoral position in the visualization department of the Zuse Institute Berlin; and has been a visiting and remote lecturer at Hong Kong Polytechnic's School of Design. His research interests include tangible interfaces (and more broadly, human-computer interaction), computational genomics (and more broadly, interactive computational STEAM), visualization, and novel physical and electronic prototyping technologies. He also has a strong interest in computationally mediated art, craft, and design, rooted in the traditions and material expressions of specific regions and cultures.

Learn more about Dr. Ullmer at https://cc.cct.lsu.edu/groups/tangviz/


Pheno-informatics: A New Framework for Analyzing Phenomics Data

Thursday, February 18

Dr. Jin Chen

Abstract

Nowadays, DNA sequence data are available for many species, but the systematic quantification and analysis of phenotypes remains a big challenge. My research aims to bridge the genotype-phenotype gap by developing novel data mining techniques, so that multi-omics data can be transformed into testable hypotheses for identifying important genes. In this talk, I will first introduce our recent progress in phenomics data modeling, including a new inter-functional phenomics clustering method and a new phenotype-environment relationship learning framework. I will illustrate how these tools have allowed us to discover new mechanisms of photosynthesis in plants. In the second part, I will discuss our future plans in bioinformatics and data science and their applications in biomedical research.

Biography

Dr. Jin Chen obtained his PhD in Computer Science from the National University of Singapore, School of Computing, in 2007. He did his postdoctoral training at the Carnegie Institution at Stanford from 2007 to 2009. After that, he joined Michigan State University as an Assistant Professor. His research focuses on developing novel data mining, artificial intelligence, and computer vision algorithms to solve basic biological problems. With support from NSF and DOE, his group has developed a dozen pheno-informatics tools aimed at helping solve the world food shortage problem.


Verifiable Privacy-preserving Monitoring for Cloud-assisted mHealth Systems

Friday, February 12

Dr. Linke Guo, Binghamton University, State University of New York

Abstract

Widely deployed mHealth systems enable patients to efficiently collect, aggregate, and report their Personal Health Records (PHRs), thereby lowering costs and shortening response times. The increasing need for PHR monitoring requires the involvement of healthcare companies that provide monitoring programs for analyzing PHRs. Unfortunately, healthcare companies lack the computation, storage, and communication capability to support millions of patients. To tackle this problem, they seek help from the cloud. However, delegating monitoring programs to the cloud may incur serious security and privacy breaches, because people must provide their identity information and PHRs to the public domain. Even worse, the cloud may mistakenly return incorrect computation results, which could put patients' lives in jeopardy. This talk will first go through the security and privacy breaches in current eHealth/mHealth systems. It will then focus on feasible cryptographic solutions for privacy-preserving monitoring in cloud-assisted mHealth systems.

Biography

Dr. Linke Guo is currently an assistant professor in the Department of Electrical and Computer Engineering at Binghamton University, State University of New York (SUNY). His research focuses on cybersecurity, privacy-preserving scheme development, and trust and reputation system design for wired/wireless networks and interdisciplinary systems, with emphasis on eHealth/mHealth networks, online/mobile social networks, cloud computing, and location-based services.

Dr. Guo obtained his Ph.D. and M.S. in Electrical and Computer Engineering from the University of Florida in 2014 and 2011, respectively. He received his B.E. in Electronic and Information Science and Technology from Beijing University of Posts and Telecommunications (BUPT) in 2008. He was a member of the Wireless Networks Laboratory (WINET) at the University of Florida. He has served as co-chair of the Network Algorithms and Performance Evaluation Symposium at ICNC 2016 and as a regular TPC member of many conferences, including INFOCOM, Globecom, ICC, WCNC, and ICCC. He also currently serves as the system administrator of IEEE Transactions on Vehicular Technology. He received the Best Paper Award at IEEE GLOBECOM 2015. He is a member of the IEEE and the ACM.