Friday, December 13, 2013 at 12:00 PM
EBME Host: Dr. Mehdi Etezadi-Amoli
A grid-interactive inverter is an essential component of a grid-tied photovoltaic power system; it performs numerous tasks including maximizing solar-to-electric power conversion, synchronizing with the local grid, generating quality power, and avoiding islanded operation. With the recent explosive growth in the solar photovoltaic power market, new and additional tasks are being placed on these inverters. The seminar reviews the basic functionality of a grid-interactive inverter, the different circuit topologies currently available in the market along with the advantages they offer, and compliance with the current interconnection standard. Test results of islanding experiments conducted on local PV systems will be presented. Finally, the impact of high PV penetration on system operation will be illustrated, a situation that calls for utility system operator control of both active and reactive power generation by grid-tied inverters.
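One of the inverter tasks mentioned above, maximizing solar-to-electric conversion, is commonly implemented as maximum power point tracking (MPPT). A minimal sketch of the classic perturb-and-observe approach follows; the toy PV power curve, voltages, and step size are illustrative assumptions, not values from the talk:

```python
def pv_power(v):
    # Toy PV panel curve (illustrative only): power peaks near v = 30 V.
    i = max(0.0, 8.0 * (1.0 - (v / 40.0) ** 8))  # crude I-V characteristic
    return v * i

def perturb_and_observe(v=20.0, step=0.5, iters=200):
    """Classic P&O MPPT: nudge the operating voltage and keep moving in
    whichever direction increases the measured output power."""
    p_prev = pv_power(v)
    direction = 1.0
    for _ in range(iters):
        v += direction * step
        p = pv_power(v)
        if p < p_prev:           # power dropped: reverse the perturbation
            direction = -direction
        p_prev = p
    return v, p_prev

v_mpp, p_mpp = perturb_and_observe()
```

In steady state the algorithm oscillates within a step or two of the true maximum power point, which is why real controllers often shrink the step size as they converge.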
Dr. Yahia Baghzouz received his BS, MS, and PhD degrees in electrical engineering from Louisiana State University, Baton Rouge, LA, in 1981, 1982 and 1986, respectively. After graduation, he worked at the University of Louisiana at Lafayette for one year, and then he joined the Department of Electrical and Computer Engineering, University of Nevada, Las Vegas, NV. He is currently Professor of Electrical Engineering and Associate Director of the Center for Energy Research at UNLV.
Dr. Baghzouz teaches courses in power systems, power electronics, and circuit theory. His research areas include electric power quality and renewable resource integration. He has authored or coauthored over 150 technical articles. He is a Senior Member of IEEE and a Registered Professional Engineer in the State of Nevada.
Friday, November 15, 2013 at 12:00 PM
Host: Dr. George Bebis
For more than a half-century, humans have been learning to live and work in space. Future human missions to the Moon, Mars, and other destinations offer many new opportunities for exploration. But astronaut time will always be in short supply, consumables (oxygen, water, etc.) will always be limited, and some work will not be feasible (or productive) to do manually.
Remotely operated robots can complement human explorers. Telerobots can perform work under remote supervision by humans from a space station, spacecraft, habitat, or even from Earth. Telerobots, particularly semi-autonomous systems, can increase the performance and productivity of human space exploration by carrying out work that is routine and highly repetitive.
In this talk, I will present some of the ways in which the NASA Ames Intelligent Robotics Group (IRG) is currently working with remotely operated robots. A central focus of our research has been to develop and test these robots with astronauts on the International Space Station. Our primary objective is to study how remotely operated robots can increase the performance, reduce the costs, and improve the likelihood of success of human space exploration.
Dr. Terry Fong is the Director of the Intelligent Robotics Group at the NASA Ames Research Center. From 2002 to 2004, he was the deputy leader of the Virtual Reality and Active Interfaces Group at the Swiss Federal Institute of Technology (EPFL). From 1997 to 2000, he was Vice President of Development for Fourth Planet, a developer of real-time visualization software. Dr. Fong has published more than a hundred papers in field robotics, human-robot interaction, and robot user interfaces. He received his B.S. and M.S. in Aeronautics and Astronautics from the Massachusetts Institute of Technology and his Ph.D. in Robotics from Carnegie Mellon University.
Friday, November 1, 2013 at 12:00 PM
Lieutenant Colonel Warren L. Rapp is currently the Commander of the 232nd Operations Squadron, Creech AFB, Indian Springs, NV. In this position, LtCol Rapp is responsible for all of the operations, maintenance, intelligence, and training personnel integrated throughout Creech AFB and Nellis AFB. As a group, these Nevada Air Guard members conduct daily training and combat sorties in Nevada and across the globe.
Lieutenant Colonel Rapp was born in Clovis, New Mexico in 1967, and is married to his wife Kim; they have four children, Jared, Brandon, McKenzie and Alexis. LtCol Rapp started his military career by completing 12 weeks of Officer Candidates School with the United States Marine Corps during the summers of 1987 and 1988. He was commissioned a 2LT in the Marines on July 7, 1990, after completing his bachelor's degree in Psychology from Brigham Young University. His military education includes Flight School (Pensacola, FL), CH-46 helicopter training (Marine Corps Air Station Tustin, CA), Aviation Safety School (Naval Postgraduate School, Monterey, CA), the C-130 Aircraft Commanders Course (Little Rock AFB), and MQ-1 Predator Flight Training (Creech AFB); he has completed Air Command and Staff College and is currently enrolled in Air War College.
Highlights of LtCol Rapp's career include serving as a flight instructor in Navy Flight School (Corpus Christi, TX) and C-130 Flight School (Little Rock AFB, AR), as Commander of Airport Security Troops at Reno-Tahoe IAP (2001-2002), and as Director of Logistics for the NV Air National Guard, as well as qualifying as a fully combat-trained MQ-1 pilot. LtCol Rapp also serves on Governor Sandoval's committee for economic development for Nevada, acting as a military UAS advisor to Northern Nevada communities and businesses. He currently commands the 232nd Operations Squadron at Creech AFB.
Significant decorations received by LtCol Rapp are the Meritorious Service Medal, Aerial Achievement Medal, Air Force Commendation Medal, Army Commendation Medal, Air Force Outstanding Unit Award, National Defense Service Medal, Patriot Award, Southwest Asia Service Medal, Global War on Terrorism Medal, Armed Forces Reserve Medal, and the Kuwait Liberation Medal (Saudi Arabia and Government of Kuwait).
LtCol Rapp has been actively involved in the community since he was a child growing up primarily in Reno, NV. Over the years he has coached numerous youth soccer teams, participated as a leader in the Boy Scouts of America, and organized and executed several Air Guard Community Blood Drives. Before moving to Las Vegas in 2008, LtCol Rapp served as the President of the Reno, NV chapter of the Military Officers Association of America. He continues to coach competitive youth soccer in Las Vegas, and is an active member of the National Guard Association of the United States and of his local church.
Friday, October 18, 2013 at 12:00 PM
The Cancer Genome Atlas (TCGA) is a rapidly expanding resource that is accelerating discovery in cancer by providing the research community with minable genomic and clinical outcome data. Recently, pathology images from H&E-stained samples have been added to complement the molecular and clinical data. However, utilization of whole-slide images is substantially hindered by batch effects, biological heterogeneity, and tumor composition. A computational pipeline is presented that overcomes these complexities to reveal intrinsic subtypes from the morphometric signatures of a cohort of 250 GBM patients. Subsequently, molecular correlates of each subtype are constructed for targeted therapy. In addition to the computed morphometric subtypes, tumor heterogeneity is also computed to evaluate whether heterogeneity is a stronger predictor of outcome.
Dr. Bahram Parvin is a principal scientist at the Lawrence Berkeley National Laboratory and has an adjunct appointment with the EE Department at the U.C. His laboratory focuses on technology development for the realization of pathway pathology, elucidating the molecular signatures of aberrant morphogenesis in engineered matrices, and screening for probes for labeling and cargo delivery. He has published over 100 papers and was the General Chair of the 2013 IEEE International Symposium on Biomedical Imaging: From Nano to Macro. He is also an Associate Editor for IEEE Transactions on Medical Imaging and a member of the steering committees for IEEE Bio Imaging and Signal Processing and IEEE Bioengineering and Health Care.
Friday, September 27, 2013 at 12:00 PM
Rex Briggs has been helping Fortune 500 marketers improve marketing Return On Investment (ROI) by applying analytics for more than two decades. Rex is a leading expert in unlocking marketing ROI profits through measurement. Rex is credited with pioneering many digital measurement techniques, including post-click analysis, attribution modeling, online advertising effectiveness, Cross Media Measurement, Social Media effectiveness, and the integration of marketing mix modeling with attitudinal measurement.
Rex's ROI work is referenced in over 100 marketing books, and his own books, What Sticks: Why Most Advertising Fails and How to Guarantee Yours Succeeds (2006) and SIRFs-Up: The Story of How "Spend To Impact Response Functions" (SIRFs), Algorithms and Software Are Changing The Face of Marketing (2012), have been required reading at top business schools including Wharton and Harvard. Both of his books have made news on the cover of Ad Age for their groundbreaking insights on how to reduce waste and improve marketing ROI.
Rex's company, Marketing Evolution, founded in 2000, operates in over 20 countries, serving companies including AB InBev, Best Buy, Coca-Cola, Cox, and Citibank (to name a few), and has been named the fastest-growing ROI company in America by Inc. Magazine for the past two years. Rex is a sought-after corporate presenter and guest lecturer at top business schools because he uses his deep expertise in ROI measurement to simplify and focus the message on practical ways to integrate ROI into the marketing organization for competitive advantage.
Friday, September 20, 2013 at 12:00 PM
Human decision-makers play a major role in the operation of most real-world systems of today. In many cases, the successful operation of these systems hinges upon the sound judgment of a few individuals. For example, pilots and air traffic controllers continuously make decisions that determine the safety and operation of the National Airspace System (NAS). Even if replacing the humans with automation is conceivable, it will be many decades before the dependence on human decision-making becomes negligible. Since humans play such a crucial role in characterizing real-world systems, it follows that making any accurate predictions about system behavior requires a model capable of capturing both the human and non-human dynamics of the system. In this talk, I am going to present a game-theoretic framework to predict the evolution of complex systems with human elements. I will show how this framework is used to predict human decisions in midair aircraft conflicts, aircraft merging and landing, and cyber-attacks on smart grids.
Yildiray Yildiz is an associate scientist at NASA Ames Research Center, employed by U.C. Santa Cruz. He received his B.Sc. degree from Middle East Technical University in 2002, his M.Sc. degree from Sabanci University in 2004, and his Ph.D. degree from the Massachusetts Institute of Technology in 2009. After completing his Ph.D., Yildiz joined NASA Ames Research Center as a postdoctoral associate employed by the University of California, Santa Cruz; in 2010, he became an associate scientist at the same institution. His research interests lie at the intersection of control theory and applications to aerospace and automotive systems. Dr. Yildiz is the recipient of a best student paper award and a NASA Group Achievement Award "for outstanding technology development of the CAPIO system at the Vertical Motion Simulator supporting NASA's Green Aviation Initiative." He has served on the program committees of several conferences and is a reviewer for several journals. He was a member of the AIAA Guidance, Navigation and Control Technical Committee from 2010 through 2013. His research is supported by Ford Motor Company and NASA Ames Research Center Innovation Funds.
Friday, April 26, 2013 at 12:00 PM
Host: Dr. Sushil Louis
Meet your CSE alumni, ask them about jobs, find out what it is like to work for a startup, to start a startup, to work for a large company or to work locally.
Jeff Chao is the Engineering Lead at Post+Beam in San Francisco. Prior to Post+Beam, Jeff was the 5th employee at Mobsmith, where he was one of the main contributors to the mobile ad creation tool and rendering engine that helped lead to the company's acquisition by Rubicon. Since graduation, Jeff has shipped code in highly volatile environments at scale, hired and managed a team, and architected products from the ground up.
Eric Jennings is co-founder of Pinoccio, an open, wireless hardware platform for makers. He is a TechStars NYC Winter 2011 alum, has worked for tech startups in New York City, San Francisco, and Los Angeles, and holds a CS bachelor's degree from the University of Nevada, Reno. He's in love with the connection between software, hardware, people, and the environment in which they live, but has a crush on Erlang, analog synthesizers, and permaculture.
Ben Lucchesi is Chief Software Architect at Granicus and directs the strategic development vision on Granicus' legislative management platforms. Ben has several years of experience in building robust, interactive web and client-server applications. Prior to joining Granicus, Ben was the e-Design Manager at IQ Systems, where he designed and developed custom business solutions for electronic commerce and inventory management applications.
Chris Miles is a senior prototype engineer and UX designer at Microsoft, and has spent the last 3 years making the next big things, including the Kinect and a number of other secret projects he doesn't dare mention in writing. Before that, he spent several years in the gaming industry working on venerable franchises at EA-Maxis and LucasArts. Chris received both his undergraduate and graduate degrees here at the University of Nevada, Reno, culminating in a Ph.D. in Computer Science and Engineering in 2007.
Saam Talaie received his bachelor's degree in Computer Science from the University of Nevada, Reno in 2004 and went on to complete a master's degree at UCLA. He started his post-college career at NetSeer, a startup focusing on natural language processing and machine learning for web content-advertisement optimization. For the last 2 years, Saam has been at Apple, working first on the backend for Apple's retail POS system, EasyPay touch, and now working on machine learning.
Hector Urtubia came to the US from Chile in 1997 and did his undergrad in Computer Science at the University of Nevada, Reno in 1998. Hector now works at PC-Doctor Inc., a hardware diagnostics company based in Reno, where he is a Senior Software Engineer and Team Lead for the UI and Data teams. Apart from a strong passion for programming and software engineering, Hector loves spending time with his family, playing music, building DIY hardware, biking, traveling, and learning new technologies.
Friday, April 29, 2013 at 12:00 PM
Host: Dr. Mehmet Gunes
The Domain Name System (DNS) is one of the fundamental components of Internet functionality. The ability to reliably translate domain names to resources, such as Internet Protocol (IP) addresses, is critical to Internet use. The DNS Security Extensions (DNSSEC) add authentication to the DNS, allowing responses to be cryptographically validated. However, DNSSEC deployment and maintenance complexity is non-trivial and has proven to be a challenge for early adopters. Many deployments have suffered from DNSSEC misconfiguration, resulting in inaccessibility of their resources. We discuss approaches to DNSSEC monitoring and surveillance, including passive and active measurement of the DNS, and demonstrate an online DNS visualization tool designed to assist administrators in identifying critical issues with their DNSSEC deployments.
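The chain-of-trust idea behind DNSSEC validation can be sketched with a toy model: each parent zone publishes a digest (a DS-like record) of its child zone's key, and a validator walks from a root trust anchor downward. The zone names, keys, and digest scheme below are illustrative assumptions; real DNSSEC validates RRSIG signatures over DNSKEY/DS resource record sets:

```python
import hashlib

def ds_digest(zone, key):
    """Toy stand-in for a DS record: a parent publishes a digest of the
    child zone's public key (real DNSSEC hashes the DNSKEY RR)."""
    return hashlib.sha256((zone + "|" + key).encode()).hexdigest()

def validate_chain(chain, trust_anchor):
    """Walk the delegation chain from the root trust anchor downward,
    checking each zone's key against the digest its parent published."""
    expected = trust_anchor
    for zone, key, child_ds in chain:
        if ds_digest(zone, key) != expected:
            return False   # broken chain: misconfigured or tampered zone
        expected = child_ds
    return True

# Hypothetical zones and keys, purely illustrative.
root_key, com_key, example_key = "root-ksk", "com-zsk", "example-zsk"
anchor = ds_digest(".", root_key)
chain = [
    (".", root_key, ds_digest("com", com_key)),
    ("com", com_key, ds_digest("example.com", example_key)),
    ("example.com", example_key, None),
]
```

A single stale or mismatched digest anywhere in the chain makes the whole subtree fail validation, which is exactly why the misconfigurations mentioned above can render resources inaccessible.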
Casey Deccio is a Principal Research and Development Cybersecurity Staff Member at Sandia National Laboratories in Livermore, CA. He joined Sandia in 2004 after receiving his BS and MS degrees in Computer Science from Brigham Young University, and he received his PhD in Computer Science from the University of California, Davis in 2010.
Casey's research interests lie primarily in network measurement, including DNS(SEC) and IPv6.
Friday, April 12, 2013 at 12:00 PM
Host: Dr. Fred Harris
The Reno tech community is currently undergoing a very exciting transformation. The tools we have at our disposal as software developers have made it easier than ever to create a new tech product or company. However, while it might be easy to start a tech company today, building it beyond the initial prototype phase is actually harder than ever. The risks are high, but the opportunities are even greater. Having a strong technology and startup community in Reno will help to improve the chances of success we see in local startups. Surrounding startups with other developers, designers, and forward-thinking people allows everyone to learn from one another. It also makes room for events that celebrate individual and collective successes, and moves Reno forward as a potential hub for technical and creative talent.
Colin is the co-founder of Cloudsnap, an integration platform for developers. Cloudsnap was one of the 11 companies selected for the TechStars Cloud startup accelerator in 2012. He is the founder of Reno Collective Coworking, a local coworking space that is home to sixty creative and tech professionals, including software developers, designers, artists, photographers, and many others. As the co-organizer of Hack4Reno, the Code for America Reno Brigade, and Ignite Reno, he is a strong advocate for strengthening the technology and startup scene in Reno.
We will discuss the current startup scene in Reno, the pros and cons of choosing a startup career, and the opportunities available to get involved and further your technical chops.
Friday, April 05, 2013 at 12:00 PM
Host: Dr. Mehmet Gunes
Named Data Networking (NDN) is a new Internet architecture focused on content rather than host addresses. In NDN, the network uses content name prefixes to route data. The narrow waist of the hourglass design expands from IP addresses to general names, which can refer to anything: hosts, content, services, etc. One implication is that the network's role changes from "locate this host" to "locate this content".
I will present the motivation and architecture behind NDN, along with some current work by the NDN team. I will also discuss the current NDN testbed and our work on the challenges in developing network management tools.
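The name-prefix routing described above can be illustrated as a longest-prefix match over hierarchical content names. The forwarding table ("FIB"), names, and face labels below are invented for illustration and are not NDN's actual forwarding code:

```python
def longest_prefix_match(fib, name):
    """Return the outgoing face whose name prefix matches the most
    leading components of the requested content name."""
    components = name.strip("/").split("/")
    best, best_len = None, -1
    for prefix, face in fib.items():
        p = prefix.strip("/").split("/")
        if components[:len(p)] == p and len(p) > best_len:
            best, best_len = face, len(p)
    return best

# Toy FIB: content name prefixes -> outgoing interfaces ("faces").
fib = {
    "/edu/unr/cse": "face1",
    "/edu/unr": "face2",
    "/com/video": "face3",
}
best_face = longest_prefix_match(fib, "/edu/unr/cse/seminar/slides.pdf")
```

The matching is component-wise rather than character-wise, mirroring how NDN names are structured as sequences of components instead of flat strings.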
More information can be found at www.named-data.net.
Christos Papadopoulos is currently an associate professor at Colorado State University. He received his Ph.D. in Computer Science in 1999 from Washington University in St. Louis, MO. His interests include network security, router services, and multimedia. In 2002 he received an NSF CAREER award to explore router services as a component of the next generation Internet architecture. His current interests include network security and measurements. Current projects include PREDICT, a DHS-funded project that makes security data available to researchers, and NDN: Named Data Networking, an NSF funded project looking at future Internet architectures.
Friday, March 29, 2013 at 12:00 PM
Host: Dr. Mircea Nicolescu
Robot learning and planning are difficult tasks, because the learning space is typically too large to be effectively explored by learning algorithms (due to the large amount of uncertainty in the real world) and automatic planning usually generates brittle plans whose execution is not robust in real environments (due to the impossibility of modeling all the details needed for actual accomplishment of complex tasks in a real environment). The main idea behind our research is that a suitable combination of planning and learning techniques can provide significant improvements with respect to the use of each single method. More specifically, in this talk we will present a method for generating and learning agent controllers, which combines techniques from automated planning and reinforcement learning. An incomplete description of the domain is first used to generate a non-deterministic automaton able to act (sub-optimally) in the given environment. Such a controller is then refined through experience, by learning choices at non-deterministic points. The proposed method exploits incompleteness of the model and experience in real execution in order to face the unavoidable discrepancies between the model and the environment. Implementation and results on different robot and multi-robot systems will also be shown.
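The refinement step described above, learning which action to take at each non-deterministic choice point of a planned controller, can be sketched as a simple bandit-style reinforcement learner. Everything here (the choice points, the toy `simulate` environment, the reward values) is a hypothetical illustration, not the method's actual implementation:

```python
import random

random.seed(0)  # reproducible toy run

def learn_choices(choice_points, simulate, episodes=2000, eps=0.1, alpha=0.1):
    """At each non-deterministic point of a planned controller, learn from
    execution experience which admissible action works best (tabular values,
    epsilon-greedy exploration)."""
    q = {cp: {a: 0.0 for a in actions} for cp, actions in choice_points.items()}
    for _ in range(episodes):
        policy = {}
        for cp, actions in choice_points.items():
            if random.random() < eps:
                policy[cp] = random.choice(actions)       # explore
            else:
                policy[cp] = max(q[cp], key=q[cp].get)    # exploit
        reward = simulate(policy)          # execute the refined controller
        for cp, a in policy.items():
            q[cp][a] += alpha * (reward - q[cp][a])
    return {cp: max(qa, key=qa.get) for cp, qa in q.items()}

# Toy domain: at a doorway, moving slowly succeeds more often; in a
# corridor, the left route is shorter (invented rewards).
choice_points = {"doorway": ["fast", "slow"], "corridor": ["left", "right"]}

def simulate(policy):
    r = 1.0 if policy["doorway"] == "slow" else 0.2
    r += 1.0 if policy["corridor"] == "left" else 0.5
    return r

best = learn_choices(choice_points, simulate)
```

The planner bounds the search by fixing everything except the choice points, which is what keeps the learning space small enough to explore, in line with the talk's main idea.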
Luca Iocchi is Associate Professor at Sapienza University of Rome, Italy. His research activity is focused on methodological, theoretical and practical aspects of artificial intelligence, with applications related to cognitive mobile robots and computer vision systems operating in real environments. His main research interests include cognitive robotics, action planning, multi-robot coordination, robot perception, robot learning, sensor data fusion.
He is the author of more than 100 refereed papers (h-index 26) in journals and conferences in artificial intelligence and robotics, a member of the program committees of several related conferences, a guest editor for journal special issues, and a reviewer for many journals in the field. He has coordinated national and international projects and, in particular, he has supervised the development of (teams of) mobile robots and vision systems with cognitive capabilities for applications in dynamic environments, such as RoboCup soccer, RoboCup rescue, RoboCup@Home, multi-robot surveillance, and automatic video-surveillance. He also contributed to benchmarking domestic service robots through scientific competitions within RoboCup@Home, of which he has been a member of the Executive Committee since 2008.
Thursday, March 14, 2013 at 12:00 PM
Host: Dr. George Bebis
With the recent proliferation of spectrum-dependent operations such as cellular communication, public safety, military tactical networks, and wireless LANs, the wireless industry is experiencing a fast paradigm shift from static spectrum allocation to opportunistic dynamic spectrum access (DSA), and cognitive radio (CR) networking based on DSA has become one of the prime foci in wireless networking. A number of international standardization efforts (e.g., IEEE 802.22, P1900) have already begun in this area. The move to digital TV transmission in June 2009 is expected to enable the first generation of DSA-capable wireless networks in the current decade.
Though DSA allows unused licensed bands to be used by unlicensed (secondary) networks in an opportunistic manner under the provision that they would vacate upon the return of the licensed users, the paradigm "does not provide any protection from interference" among the greedy secondary networks in the open-access DSA model. The opportunistic and network-aware real-time DSA nature of the system introduces entirely new classes of spectrum access challenges and security threats in the newly proposed paradigm. Uncertainties in licensed-user detection due to unknown signal characteristics, an unreliable wireless medium, and the lack of a common control channel, coupled with the presence of malicious agents, make the cognitive radio network and its spectrum sensing highly vulnerable to various unintentional and intentional disruption threats in a hostile network environment. Unfortunately, due to the unique DSA-based communication paradigm, traditional techniques fail to address these emerging survivability issues in CR networks, and there is little understanding of how a CR network should operate to remain feasible under such threats.
In this research, we study the vulnerability challenges of cognitive radio networks under adversarial conditions and investigate mechanisms that aid survivability and self-coexistence of these networks. Since cognitive networks act in an autonomous, rational and intelligent manner through their sensing, learning, and adaptation capabilities just like the human societies, we ask: how models from human societies can be used to mitigate vulnerabilities and maintain self-coexistence among legitimate CR networks? To address the challenges, we advance ideas from game theory, behavioral model, network forensics and cognitive radio to optimize decisions under uncertainty. To comprehensively assess the effectiveness of the proposed mechanisms in realistic wireless environments, we have also designed and implemented SpiderRadio, a software defined cognitive radio testbed prototype for dynamic spectrum access networking.
Dr. Shamik Sengupta is an Assistant Professor in the Department of Mathematics and Computer Science, John Jay College of Criminal Justice of the City University of New York. Prof. Sengupta received his Ph.D. degree from the School of Electrical Engineering and Computer Science, University of Central Florida, Orlando in 2007. His research in wireless networking, cognitive radio networks, cybersecurity, covert networking and inter-disciplinary studies has been funded by the National Science Foundation (NSF), National Institute of Justice (NIJ), NY GRTI and PSC-CUNY. Prof. Sengupta has served as the Vice-Chair of the Mobile Wireless Network (MobIG) special interest group of the IEEE ComSoc Multimedia Communications Technical Committee. He serves on the organizing and technical program committees of various ACM/IEEE conferences and holds editorial assignments for several journals. Prof. Sengupta is the recipient of the IEEE Globecom 2008 Best Paper Award and of the 2012 NSF CAREER Award.
Monday, March 11, 2013 at 12:00 PM
Host: Dr. Yaakov Varol
Rational decision making in multi-agent systems, where each intelligent agent faces uncertainty and local/noisy observations, is a NEXP-complete problem. It is not surprising that applications of large multi-agent systems such as computer games and multi-agent based simulations, as well as multi/swarm robotics, have not yet been able to tap into game/decision-theoretically optimal solutions. My medium-term research goal is to transform these application areas by employing approximately optimal solutions to the decision-theoretic problem, in lieu of the current practice of employing largely hand-designed solutions with inadequate quality (extent of suboptimality) guarantees. I will present the recent work of my research group at USM on this objective, where approximately optimal solutions are learned by the agents themselves by distributed reinforcement learning, and demonstrate results in small 2-robot problems. If time permits, I will talk about how this approach can be extended to larger multi-agent systems by means of transfer learning and ideas from the game industry.
Bikramjit Banerjee received a Ph.D. in Computer Science from Tulane University in 2006, with a graduate research excellence award. He spent a year in UT-Austin as a postdoc, and then a year as an assistant professor at DigiPen Institute of Technology (a school for game developers in Redmond, WA), before joining The University of Southern Mississippi as a tenure-stream faculty. He has co-authored over 50 papers and a patent in the areas of multi-agent systems and machine learning, attracted nearly $1.75 million of federal research funding as co-PI or PI from DHS, DoD and NASA, and has directed doctoral, master's and undergraduate honor theses. He serves the university and broader research communities as panelist, associate editor, reviewer, and committee member, most recently serving on the senior program committee of the 23rd International Joint Conference on AI (IJCAI-2013), the most influential conference in AI.
Friday, March 08, 2013 at 12:00 PM
Host: Dr. Fred Harris
OCR systems are widely used in projects like Google Books and the Internet Archive. Even though OCR is often thought of as a "solved problem" due to nominally low error rates, fully automatic conversion of printed text into digital form remains elusive. I will give a general overview of OCR and OCR systems, and then describe and contrast two approaches to the core of the OCR problem: a carefully designed text recognizer based on segmentation using optimal cuts, a scalable local classifier, and language modeling using weighted finite state transducers; and a second recognizer based on recurrent neural networks. I will provide evaluations, benchmarks, and examples of the performance of both recognizers, and discuss implications for the design of classifiers and pattern recognition systems in general.
Thomas Breuel is professor of computer science at the Technical University of Kaiserslautern Computer Science Department and head of the Image Understanding and Pattern Recognition (IUPR) research group. His research group works in the areas of image understanding, document imaging, computer vision, and pattern recognition. Previously, he was a researcher at Xerox PARC, the IBM Almaden Research Center, IDIAP, Switzerland, as well as a consultant to the US Bureau of the Census. He is an alumnus of MIT and Harvard University.
Friday, March 08, 2013 at 11:00 AM
Host: Dr. Eelke Folmer
The field of Natural User Interaction (NUI) focuses on allowing users to interact with technology through the range of human abilities, such as touch, voice, vision and motion. Children are still developing their cognitive and physical capabilities, creating unique design challenges and opportunities for interacting in these modalities. This talk will describe Lisa Anthony's research over the past decade in (a) understanding children's expectations and abilities with respect to NUIs and (b) designing and developing new multimodal NUIs for children in a variety of contexts. Examples of projects she will present are her NSF-funded project on understanding how children use touch and gesture interaction on mobile devices, and her dissertation work on designing natural interactions for children in educational contexts. She will also present plans for expanding this work over the next five to ten years.
Lisa Anthony is presently a post-doctoral research associate in the Information Systems Department at the University of Maryland Baltimore County (UMBC). She holds an MS in Computer Science (Drexel University, 2002) and a PhD in Human-Computer Interaction (Carnegie Mellon University, 2008). After her PhD, Lisa spent two years in a research and development laboratory working on DARPA- and ONR-funded user-centered interface projects. Her current research interests include understanding how children can make use of advanced interaction techniques and how to develop technology to support them in a variety of contexts, including education, healthcare and serious games. Her PhD dissertation investigated the use of handwriting input for middle school math tutoring software, and her simple and accurate multistroke gesture recognizers, called $N and $P, are well-known in the field of interactive surface gesture recognition.
Thursday, March 07, 2013 at 11:00 AM
Host: Dr. Mehmet Gunes
Crowdsensing is a new paradigm that takes advantage of pervasive smartphones to collect and analyze data beyond the scale of what was previously possible. Appropriate incentives are necessary to compensate smartphone users for the resource consumption while participating in crowdsensing. In this talk, I will discuss how to design incentive mechanisms for two crowdsensing models: the platform-centric model, where the platform provides a reward shared by participating users, and the user-centric model, where users have more control over the payment they will receive. For the platform-centric model, I will present an incentive mechanism using a Stackelberg game, where the platform is the leader while the users are the followers. I will show how the platform can predict the users' sensing time and therefore maximize its own utility. For the user-centric model, I will present an auction-based incentive mechanism, which is computationally efficient, individually rational, profitable, and truthful.
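The Stackelberg structure described above can be sketched numerically: the platform (leader) announces a reward, anticipates the users' (followers') equilibrium sensing times, and picks the reward maximizing its own utility. The utility functions, costs, and grids below are toy assumptions for illustration, not the talk's actual mechanism:

```python
import math

def user_best_response(R, t, i, cost, grid):
    """Follower: user i picks the sensing time maximizing its share of the
    reward R (proportional to its time) minus its sensing cost."""
    best_t, best_u = 0.0, 0.0
    for ti in grid:
        total = ti + sum(t[j] for j in range(len(t)) if j != i)
        share = R * ti / total if total > 0 else 0.0
        u = share - cost[i] * ti
        if u > best_u:
            best_t, best_u = ti, u
    return best_t

def followers_equilibrium(R, cost, grid, rounds=50):
    """Iterate best responses until the users' sensing times settle."""
    t = [1.0] * len(cost)
    for _ in range(rounds):
        for i in range(len(cost)):
            t[i] = user_best_response(R, t, i, cost, grid)
    return t

def platform_choice(cost, rewards, grid, value=10.0):
    """Leader: evaluate each candidate reward R against the anticipated
    followers' equilibrium and keep the R with the best platform utility."""
    def utility(R):
        total = sum(followers_equilibrium(R, cost, grid))
        return value * math.log1p(total) - R
    return max(rewards, key=utility)

costs = [0.1, 0.2]                    # per-unit sensing costs (toy values)
grid = [0.5 * k for k in range(21)]   # candidate sensing times 0..10
eq = followers_equilibrium(2.0, costs, grid)
best_R = platform_choice(costs, [1.0, 2.0, 3.0, 4.0, 5.0], grid)
```

Note the equilibrium property the abstract alludes to: the lower-cost user contributes more sensing time, and the platform can compute this before committing to a reward.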
Dejun Yang is a PhD candidate in Computer Science at Arizona State University. He received his BS degree in Computer Science from Peking University, China. His research interests lie broadly in the areas of big data, cloud computing, crowdsourcing, network security and privacy, smart grid, and wireless networks. Much of his work focuses on resource allocation in networks, smart grid, and cloud computing, incentive mechanism design, and network security and privacy. He has published in journals including IEEE/ACM Transactions on Networking, IEEE Journal on Selected Areas in Communications, IEEE Transactions on Mobile Computing, and IEEE Transactions on Smart Grid, as well as conferences including ACM MobiCom, ACM MobiHoc, IEEE INFOCOM, IEEE ICNP, IEEE MASS, IEEE SECON, IEEE ICC and IEEE Globecom. He has received Best Paper Awards at ICC'2012, ICC'2011, and MASS'2011, as well as a Best Paper Runner-Up at ICNP'2010.
Monday, March 04, 2013 at 11:00 AM
Host: Nancy LaTourrette
Cloud computing is becoming a dominant computing paradigm. However, most cloud computing services are built on commodity systems not designed to handle the variety of threats present in this utility-like computing model. Users' concerns and surveys of hypervisor vulnerabilities have motivated our research on securing virtual machines; in particular, we focus on protection from a malicious or compromised hypervisor. We have defined hypervisor-free virtualization, realized in the NoHype architecture, which aims to eliminate the need for an active hypervisor while the virtual machines run. Our key insight is to use hardware virtualization features, originally designed for performance reasons, to remove the hypervisor attack surface and securely isolate the virtual machines. We have also defined hypervisor-secure virtualization, realized in the HyperWall architecture, which further improves virtual machine security while providing more functionality than NoHype. The HyperWall architecture allows an untrusted commodity hypervisor to manage the system while the virtual machines are protected from it. Our key contribution is a new hardware feature we introduced: DRAM accessible only by the hardware, used for storing the protections. To improve confidence in the security of the design, we recently proposed a novel security verification methodology and applied it to the component interactions and protocols of HyperWall. By designing and verifying such architectures for secure cloud computing, we can enable more users to enjoy the benefits of cloud computing and to securely process sensitive code and data in virtual machines running on cloud servers - even if attackers can gain hypervisor-level privileges.
Jakub Szefer's research interests are at the intersection of computer architecture and computer security. His recent work focuses on securing cloud computing, even if the hypervisor running on the cloud servers is compromised. He received his B.S. degree with highest honors in Electrical and Computer Engineering from the University of Illinois at Urbana-Champaign in 2006 and an M.A. in Electrical Engineering from Princeton University in 2009, and he expects his Ph.D., also in Electrical Engineering from Princeton University, in early 2013. He is part of the Princeton Architectural Lab for Multimedia and Security (PALMS) led by Prof. Ruby B. Lee. In addition to research, he enjoys teaching and has won two outstanding TA awards and the Wu Prize for Excellence.
Friday, March 01, 2013 at 11:00 AM
Host: Dr. Mehmet Gunes
Coverage is one of the fundamental concepts in the design of wireless sensor networks (WSNs), in the sense that the monitoring quality of a phenomenon depends on the quality of service provided by the sensors, i.e., how well the field of interest is covered. Several applications require k-coverage, where each point in the field is covered by at least k sensors, which helps increase data availability and thus ensures better data reliability. Achieving k-coverage of a field is a challenging issue in sparsely deployed WSNs. In this talk, we investigate the problem of k-coverage in sparse WSNs using static and mobile sensors, which do not necessarily have the same features (i.e., communication, sensing, and power). Specifically, we propose an optimized framework for k-coverage in sparsely deployed WSNs, which exploits sensor heterogeneity and mobility. First, we characterize k-coverage based on Helly's Theorem. Second, we introduce our energy-efficient four-tier architecture to achieve mobile k-coverage of a region of interest in a field. Third, we propose two data gathering protocols, called direct data gathering and forwarding chain-based data gathering, using the concept of a mobile proxy sink. For energy-efficient forwarding, we compute the minimum transmission distance between any pair of consecutive mobile proxy sinks forming the forwarding chain, as well as the corresponding optimum number of mobile proxy sinks in this chain. We corroborate our analysis with several simulation results.
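The k-coverage property itself can be made concrete with a brute-force check (a naive sketch for intuition, not the optimized framework from the talk; the grid resolution, deployment, and disk sensing model are assumptions): sample the field on a grid and count, at each sample point, how many sensing disks contain it.

```python
# Brute-force k-coverage check over a sampling grid.
import math

def is_k_covered(sensors, radius, k, width, height, step=0.25):
    """True if every grid sample of the width x height field lies within
    `radius` of at least k of the given (x, y) sensor positions."""
    nx, ny = int(width / step) + 1, int(height / step) + 1
    for i in range(nx):
        for j in range(ny):
            x, y = i * step, j * step
            covering = sum(1 for sx, sy in sensors
                           if math.hypot(sx - x, sy - y) <= radius)
            if covering < k:
                return False
    return True

# A regular 11 x 11 grid of sensors with sensing radius 1.5 over a
# 10 x 10 field: every sample point falls inside at least 3 sensing
# disks, but the field corners are only 4-covered, so k = 6 fails.
grid = [(x, y) for x in range(11) for y in range(11)]
print(is_k_covered(grid, radius=1.5, k=3, width=10, height=10),
      is_k_covered(grid, radius=1.5, k=6, width=10, height=10))
```

The example also hints at why sparse deployments are hard: k-coverage is limited by the worst-covered point (here, the corners), not the average.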
Habib M. Ammari is an Associate Professor and the Founding Director of the Wireless Sensor and Mobile Ad-hoc Networks (WiSeMAN) Research Lab, in the Department of Computer and Information Science at the University of Michigan-Dearborn, since September 2011. He obtained his second Ph.D. degree in Computer Science and Engineering from the University of Texas at Arlington, in May 2008, and his first Ph.D. in Computer Science from the Faculty of Sciences of Tunis, in December 1996. In August 2009, he published the book "Challenges and Opportunities of Connected k-Covered Wireless Sensor Networks: From Sensor Deployment to Data Gathering." He received several prestigious awards, including the Certificate of Appreciation Award at the ACM MobiCom 2011, the Outstanding Leadership Award at the IEEE ICCCN 2011, the Best Symposium Award at the IEEE IWCMC 2011, the Lawrence A. Stessin Prize for Outstanding Scholarly Publication from Hofstra University in May 2010, the Faculty Research and Development Grant Award from Hofstra College of Liberal Arts and Sciences in May 2009, the Best Paper Award at EWSN in 2008, and the Best Paper Award at the IEEE PerCom 2008 Google Ph.D. Forum. He is the recipient of the Nortel Outstanding CSE Doctoral Dissertation Award in February 2009, and the John Steven Schuchman Award for 2006-2007 Outstanding Research by a PhD Student in February 2008. He received a three-year US National Science Foundation (NSF) Research Grant Award, in June 2009, and the US NSF CAREER Award, in January 2011. He serves as Associate Editor of several international journals, including ACM TOSN and IEEE TC. Also, he has served as Program Chair of numerous IEEE and ACM conferences, symposia, and workshops.
Wednesday, February 27, 2013 at 11:00 AM
Host: Dr. Monica Nicolescu
There exists a great untapped potential for the use of intelligent robots as therapeutic social partners. However, enabling robots to understand social behavior, and to do so while interacting with users, is a challenging problem. This argues for data-driven methods that capture the relevant range of interactions. This research addresses the challenge of designing data-driven behaviors for socially assistive robots (SAR) in order to enable them to recognize and appropriately respond to a child's free-form behavior in unstructured play contexts.
This research presents a data-driven methodology for enabling fully autonomous robots to interact with users in eldercare and education domains. Autonomous robot operation is a critical aspect of the methodology; save for safety interventions by a human operator, the robot acts of its own accord. This talk will include:
Dave is currently a Postdoctoral Associate at Yale University in the Social Robotics Lab under the direction of Prof. Brian Scassellati. The focus of his research is on using probabilistic models of social behavior and developing methodologies for applying these models as part of autonomous robot systems. The understanding gained from the study of social behavior is necessary for enabling technology to handle the increasingly complex interaction scenarios that characterize the embodied and social world.
Dave graduated from the University of Rochester in 2003 with a B.S. in Computer Science. While there he was a founding member of the Undergraduate Robot Research Team, winner of the 2002 AAAI Mobile Robot Competition. Dave received his M.S. in Computer Science in 2007 and his Ph.D. in Computer Science in 2011 from the Viterbi School of Engineering (VSoE) at the University of Southern California. He worked in the Interaction Lab, where his advisor was Prof. Maja Matarić.
He was awarded a National Science Foundation Computing Innovation Fellowship to support his work. He has also won the Viterbi School of Engineering Best Dissertation Award, the Mellon Mentoring Award, the Order of Arete, and the George Bekey Service award.
Wednesday, February 13, 2013 at 11:00 AM
Host: Dr. Sergiu Dascalu
Domain-Specific Modeling (DSM) has promoted the importance of using higher abstractions to address the needs of end-users who are domain experts. Yet, much of the research in the area requires end-users to know details about metamodels, model transformation languages, and other peculiarities that are sometimes incongruent with the goals of DSM as they relate to removing accidental complexities from the end-user. This talk will highlight some of the discrepancies in the goals of supporting domain experts compared to the current tooling and state-of-the-art in DSM practice. The talk will survey some of the presenter's recent work in using demonstration-based approaches to address some of the needs for improving support of end-user development. A goal of this talk is to raise awareness and highlight the need to involve end-users in the modeling process through various "By Demonstration" approaches.
Jeff Gray is an Associate Professor in the Department of Computer Science at the University of Alabama. He received a Ph.D. from Vanderbilt University and BS/MS from West Virginia University, all in Computer Science. Jeff's research interests include model-driven engineering, aspect-oriented software development, software evolution, mobile computing, and topics in Computer Science Education. He has recently published on these topics in IEEE Software, Communications of the ACM and IEEE Computer. Jeff's work has been supported by Google, IBM, DARPA, US Air Force, Department of Education, and NSF (including a 2007 NSF CAREER award). In Fall 2008, he was named the Alabama Professor of the Year by the Carnegie Foundation. More information about his work can be found at http://gray.cs.ua.edu
Tuesday, February 12, 2013 at 11:00 AM
Host: Dr. Fred Harris
Agile methodology is an approach to project management that helps teams respond to the unpredictability of building software through incremental, iterative work cadences known as sprints. The methodology was developed for situations where the waterfall model fails. The biggest drawback of the waterfall model is its assumption that every requirement of the project can be identified before any design or coding occurs. That assumption may hold for the assembly of an automobile, where each piece is added in sequential phases; it may or may not hold for software development. For BAU (Business as Usual) projects, where the software has already been in use for a long time, the waterfall model is a good fit for implementing change requests, because the amount of uncertainty is far lower than when developing a new product. For brand-new software, however, the waterfall model is not an ideal choice because of the uncertainty in requirements and in user quality expectations. The end product may not be exactly what the user expected, due to a mismatch in requirement understanding between user and developer. It may also happen that a team built exactly the software it was asked to build, but in the time it took to create it, business realities changed so dramatically that the product became irrelevant. In that scenario, a company has spent time and money to create software that no one wants.
Agile development methodology provides the opportunity to assess the direction of a project throughout the development lifecycle. It does so through an iterative cycle of building and testing, followed by an assessment by the user/business, repeated until they are satisfied with the product. By focusing on the repetition of abbreviated work cycles, as well as the functional product they yield, agile methodology can be described as iterative and incremental.
Agile methods should not be confused with the spiral model. The spiral model forces you to plan all iterations at the beginning, whereas agile gives you the flexibility to plan only what you know with certainty and to leave the rest of the planning for the next iteration. In the spiral model, since all iterations are planned at the outset, the number of iterations is fixed; in agile, planning is dynamic, so you can have as many iterations as are required to produce the final product.
Roy is a Sr. QA Engineer and a Test Architect at GE Energy, Bently Nevada, in Minden. He received his BS in Computer Science from the University of Nevada, Reno in 2001. Roy started his career at Bently as a test technician. He then became a Test Engineer, spending 8 years in Manufacturing before moving to Engineering to do test automation, which is where he was exposed to Agile. He led the system testing effort on a new product introduction team that used Agile to create its products. He is now the Test Architect for engineering, where he has been using the Agile process for the last five years.
Friday, February 08, 2013 at 12:00 AM
Host: Dr. George Bebis
New computer methods have been used to shed light on a number of recent controversies in the study of art. For example, computer fractal analysis has been used in authentication studies of paintings attributed to Jackson Pollock recently discovered by Alex Matter. Computer wavelet analysis has been used for attribution of the contributors in Perugino's Holy Family. An international group of computer and image scientists is studying the brushstrokes in paintings by van Gogh for detecting forgeries. Sophisticated computer analysis of perspective, shading, color and form has shed light on David Hockney's bold claim that as early as 1420, Renaissance artists employed optical devices such as concave mirrors to project images onto their canvases.
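The fractal analysis used in the Pollock authentication debate can be sketched in a few lines (a textbook box-counting estimator, not the published method): cover the "painted" pixels of a binarized image with boxes of side s, count the occupied boxes N(s) at several scales, and estimate the fractal dimension as the slope of log N(s) versus log(1/s).

```python
# Box-counting estimate of fractal dimension from a set of pixel coordinates.
import math

def box_counting_dimension(points, scales=(1, 2, 4, 8, 16, 32)):
    """points: iterable of (x, y) integer pixel coordinates of marked pixels."""
    xs, ys = [], []
    for s in scales:
        occupied = {(x // s, y // s) for x, y in points}  # boxes of side s hit
        xs.append(math.log(1.0 / s))
        ys.append(math.log(len(occupied)))
    # Least-squares slope of log N(s) against log(1/s).
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((a - mx) * (b - my) for a, b in zip(xs, ys))
            / sum((a - mx) ** 2 for a in xs))

# Sanity checks: a filled square has dimension 2, a straight line 1.
square = {(x, y) for x in range(256) for y in range(256)}
line = {(x, 0) for x in range(256)}
print(round(box_counting_dimension(square), 3),
      round(box_counting_dimension(line), 3))  # -> 2.0 1.0
```

The authentication argument rests on whether a drip painting's estimated dimension falls in the range measured for genuine Pollocks, so the reliability of estimators like this one is itself part of the controversy.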
How do these computer methods work? What can computers reveal about images that even the best-trained connoisseurs, art historians, and artists cannot? How much more powerful and revealing will these methods become? In short, how is computer image analysis changing our understanding of art?
This profusely illustrated lecture for non-scientists will include works by Jackson Pollock, Vincent van Gogh, Jan van Eyck, Hans Memling, Lorenzo Lotto, and others. You may never see paintings the same way again.
Dr. David G. Stork, Distinguished Research Scientist and Research Director at Rambus Labs, is a graduate in physics of the Massachusetts Institute of Technology and the University of Maryland at College Park. He studied art history at Wellesley College, was Artist-in-Residence through the New York State Council of the Arts and is a Fellow of the International Association for Pattern Recognition and a Fellow of SPIE, in part for his work on computer image analysis of art. He has published eight books/proceedings volumes and has one forthcoming, including Seeing the Light: Optics in nature, photography, color, vision and holography (Wiley), Computer image analysis in the study of art (SPIE), Pattern Classification (2nd ed., Wiley), and HAL's Legacy: 2001's computer as dream and reality (MIT).
Thursday, February 07, 2013 at 11:00 AM
Host: Drs. Yantao Shen and Xiaoshan Zhu
Raman and surface-enhanced Raman scattering (SERS) imaging can provide molecular information via inelastic light scattering without physical contact. However, existing instruments are severely limited in imaging throughput and flexibility, which can be addressed by our approach based on programmable active illumination described in the first part of the talk.
Next, I will discuss our development of monolithic, hierarchical plasmonic nanoarchitectures using combined top-down and bottom-up fabrication techniques. Specifically, we are interested in nanoporous gold nanofilm (NPG NF) fabricated by vacuum deposition and dealloying, and its derivatives such as the lithographically patterned NPG nanodisk (NPG ND). While NPG NFs are already decent SERS substrates with large effective surface area and high robustness, we have observed a ~500-fold increase in SERS enhancement factor (EF) in NPG NDs compared to NPG NFs. The average SERS EF from individual NPG NDs exceeds 200 million.
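For readers unfamiliar with the quoted figures, the SERS enhancement factor is commonly defined as the per-molecule signal on the substrate relative to normal Raman scattering (the definition below is the standard textbook one; the intensities and molecule counts are assumed for illustration, not data from the talk):

```python
# EF = (I_SERS / N_SERS) / (I_ref / N_ref): per-molecule SERS intensity
# divided by per-molecule normal Raman intensity.
def enhancement_factor(i_sers, n_sers, i_ref, n_ref):
    return (i_sers / n_sers) / (i_ref / n_ref)

# Assumed values chosen so the result lands on the "200 million" scale
# quoted for the NPG nanodisks:
print(enhancement_factor(i_sers=2e4, n_sers=1e4, i_ref=1e-2, n_ref=1e6))  # -> 200000000.0
```

Because far fewer molecules sit in the probed hot spots than in a bulk reference sample, a modest raw intensity gain can translate into a per-molecule enhancement of many orders of magnitude.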
The synergy of novel microscopy instrumentation and nanostructured substrate development could enable new amplification-free applications in plasmonic molecular biosensing with higher sensitivity and specificity.
Dr. Wei-Chuan Shih is an Assistant Professor of Electrical & Computer Engineering, with a joint appointment in Biomedical Engineering, at the University of Houston. He obtained his BS from National Taiwan University, his MS from National Chiao Tung University, and his Ph.D. from the Massachusetts Institute of Technology under the late Professor Michael Feld. He worked at Schlumberger-Doll Research before his current position. He is a recipient of the National Science Foundation CAREER Award (CBET-Biophotonics) and one of only 10 recipients of NASA's inaugural Early Career Faculty Award. He is an awardee of the Gulf of Mexico Research Initiative, set up in response to the BP Deepwater Horizon oil spill. He was an MIT Martin Fellow for sustainability and is a lifetime member of the Phi-Tau-Phi Scholastic Honor Society. His multidisciplinary research has been featured in journals such as Nanoscale, Optics Letters, Optics Express, Analytical Chemistry, Journal of Biomedical Optics, Applied Spectroscopy, and the IEEE/ASME Journal of MEMS. His current research interests are in nanobiophotonics, plasmonics, hyperspectral microscopy and imaging, and N/MEMS. He is a member of OSA, IEEE, SPIE, SAS, and ASME.