The goal of the Human Data Interaction project is to develop methods at the intersection of data science, machine learning, and large-scale interactive systems in order to answer a rather simple question: why does it take so long to process, analyze, and derive insights from data? The answer lies in developing technology, and a new cadre of methodologies, grounded in a detailed understanding of how humans (scientists, researchers, analysts, sales and marketing folks, and every one of us) interact with data to analyze it, interpret it, and derive insights from it. Having ventured into multiple domains and applications, we observe that the process of generating insights from data can be organized into six steps: organize, pre-process, understand, learn models, generate insights, and disseminate.

We are developing a variety of machine learning approaches to provide powerful, scalable statistical tools to the wind energy community. These include modeling methods such as Bayesian networks, copula-based dependence modeling, and Gaussian processes, along with optimization approaches based on sampling and generative techniques. We work closely with AWS Truepower. We have identified three areas where machine learning and information technology can help improve wind systems performance. The first supports building a wind farm via resource assessment. The second optimizes a farm's layout given turbine models, farm constraints, and the wind resource; we also developed an optimal power routing algorithm for large farms that improves farm efficiency (the current implementation of the algorithm is used commercially in OpenWind). Finally, we are interested in developing techniques that improve forecasting accuracy, enabling seamless integration of wind into our energy portfolio.

Systems and machine learning projects:

High-performance computing on a multicore processor demands efficient parallelization. While dense matrices can be distributed among cores with little concern for inter-chip transport costs, sparse matrix algebra requires careful attention to data distribution and transport costs. In collaboration with Lincoln Labs, we have teamed a hierarchical GA with a fine-grained computation model. The GAs (inner and outer) adaptively determine an efficient processor mapping for sparse matrix multiplication with respect to data processing and transport costs.
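The idea of evolving a processor mapping can be sketched as a simple GA over block-to-processor assignments. This is a minimal illustration, not the hierarchical inner/outer GA described above; the block costs, dependency edges, and GA parameters are made up, and the fitness combines load imbalance with a count of cut (inter-processor) edges as a stand-in for transport cost.

```python
import random

# Hypothetical cost model: each nonzero block has a compute cost, and a
# dependency edge between blocks on different processors incurs transport.
BLOCK_COST = {0: 4, 1: 1, 2: 3, 3: 2, 4: 5, 5: 1}
EDGES = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (0, 5)]
N_PROCS = 2

def fitness(mapping):
    # Lower is better: load imbalance plus inter-processor transport.
    loads = [0] * N_PROCS
    for blk, proc in enumerate(mapping):
        loads[proc] += BLOCK_COST[blk]
    transport = sum(1 for a, b in EDGES if mapping[a] != mapping[b])
    return (max(loads) - min(loads)) + transport

def evolve(pop_size=30, gens=100, seed=0):
    rng = random.Random(seed)
    pop = [[rng.randrange(N_PROCS) for _ in BLOCK_COST] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness)                  # elitist: keep the best half
        survivors = pop[: pop_size // 2]
        children = []
        for _ in range(pop_size - len(survivors)):
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, len(a))
            child = a[:cut] + b[cut:]          # one-point crossover
            if rng.random() < 0.2:             # mutation: reassign one block
                child[rng.randrange(len(child))] = rng.randrange(N_PROCS)
            children.append(child)
        pop = survivors + children
    return min(pop, key=fitness)

best = evolve()
```

On this toy instance the GA balances the 16 units of compute across the two processors while cutting only two dependency edges.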

We are developing scalable algorithms for a variety of NP-hard problems in networks. These problems emerge in ad-hoc wireless networks and sensor networks. We have designed distributed algorithms for network coding.
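The core benefit of network coding can be shown with the classic butterfly topology, where a bottleneck link carries the XOR of two packets and each receiver recovers the packet it is missing. This is a textbook illustration of coding over GF(2), not the distributed algorithms mentioned above; the packet values are arbitrary.

```python
# XOR-based network coding over GF(2), butterfly-style.
def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

p1 = b"\x0f"          # source packet 1
p2 = b"\xf0"          # source packet 2
coded = xor(p1, p2)   # the bottleneck link carries p1 XOR p2

# Receiver 1 already has p1 and uses the coded packet to recover p2;
# receiver 2 symmetrically recovers p1.
recovered_p2 = xor(p1, coded)
recovered_p1 = xor(p2, coded)
```

Both receivers obtain both packets in one use of the bottleneck link, which routing alone cannot achieve on this topology.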

We used genetic programming to automatically generate application-specific and general compiler priority functions. These functions are known as the "Achilles' heel" of compiler design because designers typically develop them by hand and test them on problem instances that rapidly drift out of date. Our priority functions worked in the context of hyperblock scheduling and register allocation. The slides from a PLDI presentation are available as a PDF.
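To make the role of a priority function concrete, here is a list scheduler parameterized by one. The instruction DAG and the hand-written priority expression (critical-path length plus a fan-out bonus) are illustrative stand-ins; in the GP system, the expression inside `priority` is what evolution would discover rather than something written by hand.

```python
# Instruction DAG: name -> (latency, successor list). Values are made up.
DAG = {
    "a": (2, ["c"]),
    "b": (1, ["c", "d"]),
    "c": (3, ["e"]),
    "d": (1, ["e"]),
    "e": (1, []),
}

def critical_path(node, memo={}):
    # Longest latency path from node to any leaf.
    if node not in memo:
        lat, succs = DAG[node]
        memo[node] = lat + max((critical_path(s) for s in succs), default=0)
    return memo[node]

def priority(node):
    # Hand-written stand-in for a GP-evolved expression.
    lat, succs = DAG[node]
    return critical_path(node) + 0.5 * len(succs)

def list_schedule():
    preds = {n: 0 for n in DAG}
    for _, succs in DAG.values():
        for s in succs:
            preds[s] += 1
    ready = [n for n in DAG if preds[n] == 0]
    order = []
    while ready:
        ready.sort(key=priority, reverse=True)  # highest priority first
        n = ready.pop(0)
        order.append(n)
        for s in DAG[n][1]:
            preds[s] -= 1
            if preds[s] == 0:
                ready.append(s)
    return order

order = list_schedule()
```

Swapping in a different priority expression changes the schedule without touching the scheduler itself, which is what makes the priority function such an attractive target for automatic tuning.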

Support Vector Machines are an example of a recently developed machine learning algorithm that has rapidly been adopted by a wide range of application programmers as a means of classifying data and performing regression.

We are investigating how design knowledge can be easily elicited from an expert designer and exploited by an algorithm that returns to the designer a suite of Pareto-optimal (i.e., non-dominated) designs. These designs present different tradeoffs with respect to multiple objectives and allow the designer or a control algorithm to choose between them; the choice can be updated according to the currently critical performance specifications. The technical challenge is to efficiently explore the space of possible solutions with scalable techniques that accommodate high dimensionality and multiple objectives.
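The non-dominated filter at the heart of such a suite can be sketched in a few lines. The candidate designs and their two objective values (both minimized) below are purely illustrative.

```python
def dominates(a, b):
    # a dominates b if it is no worse in every objective and strictly
    # better in at least one (minimization in all objectives).
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    # Keep exactly the points no other point dominates.
    return [p for p in points if not any(dominates(q, p) for q in points)]

designs = [(1.0, 9.0), (2.0, 7.0), (3.0, 8.0), (4.0, 3.0), (5.0, 4.0)]
front = pareto_front(designs)   # the tradeoff set returned to the designer
```

Here (3.0, 8.0) and (5.0, 4.0) are dominated and dropped; the remaining three designs each represent a distinct tradeoff between the two objectives.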

Convex optimization techniques such as geometric programming and semi-definite programming are powerful tools for design and optimization. However, they require the design problem to be modeled in a specific form, such as a posynomial/monomial objective and constraints or a sum-of-squares objective, and this is often not straightforward to accomplish accurately.
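The reason geometric programs demand the posynomial/monomial form is that a monomial g(x) = c · x1^a1 · x2^a2 becomes affine under the change of variables y = log x, which renders the whole program convex. The numerical check below illustrates this on a made-up monomial (the coefficient and exponents are arbitrary).

```python
import math
import random

# Made-up monomial g(x) = c * x1**a1 * x2**a2.
c, a1, a2 = 2.0, 1.5, -0.7

def log_g(y1, y2):
    """log g(exp(y1), exp(y2)); should equal log(c) + a1*y1 + a2*y2."""
    return math.log(c * math.exp(y1) ** a1 * math.exp(y2) ** a2)

# Compare against the affine expression at random points in log-space.
rng = random.Random(0)
max_err = 0.0
for _ in range(100):
    y1, y2 = rng.uniform(-2.0, 2.0), rng.uniform(-2.0, 2.0)
    affine = math.log(c) + a1 * y1 + a2 * y2
    max_err = max(max_err, abs(log_g(y1, y2) - affine))
```

A posynomial (a sum of such monomials) similarly becomes a log-sum-exp of affine functions, which is convex; the modeling difficulty noted above is coaxing a real design problem into that form in the first place.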

Model-free methods such as evolutionary algorithms allow reconfigurable systems to adapt or self-tune based solely on performance feedback. Analog reconfigurable systems have potential payoffs in two areas. |
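A minimal example of model-free self-tuning is a (1+1) evolution strategy: perturb the current parameters, keep the mutant only if measured performance does not degrade. The quadratic `performance` function below is a software stand-in for real hardware feedback, and the step size and iteration count are arbitrary.

```python
import random

def performance(params):
    # Stand-in for measured feedback: higher is better, peak at (0.3, -0.2).
    return -((params[0] - 0.3) ** 2 + (params[1] + 0.2) ** 2)

rng = random.Random(1)
current = [0.0, 0.0]
best = performance(current)
for _ in range(500):
    candidate = [p + rng.gauss(0, 0.05) for p in current]
    score = performance(candidate)
    if score >= best:          # keep the mutant only if it does no worse
        current, best = candidate, score
```

Nothing in the loop uses a model of the system; it only compares scalar performance readings, which is exactly what makes such strategies attractive for reconfigurable analog hardware.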

Computer architecture and application complexity are rapidly increasing. With the adoption of multi-core processors for desktop computing, workloads are less predictable because applications are more complex in terms of thread parallelism and diverse computation demands. Decentralized adaptive strategies within the operating system or runtime system are potentially a scalable way to handle this complexity. We investigate computational economic mechanisms that allow individual software components to introspect on performance and adapt their runtime resource requests as they would in a marketplace of sellers and consumers.
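One simple economic mechanism of this kind is a proportional-share auction: components bid for CPU time and receive shares in proportion to their bids. The component names, bids, and capacity below are illustrative, not from our system.

```python
def allocate(bids, capacity):
    # Proportional-share allocation: each bidder's share of the resource
    # equals its fraction of the total money bid.
    total = sum(bids.values())
    return {name: capacity * bid / total for name, bid in bids.items()}

# A component that introspects on its performance could raise its bid when
# it falls behind a deadline and lower it when it has slack.
bids = {"render": 6.0, "index": 3.0, "backup": 1.0}
shares = allocate(bids, capacity=100.0)   # percent of CPU time
```

The appeal is decentralization: no global scheduler needs a model of every application, because each component encodes its own urgency in its bid.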

We have developed an evolvable-hardware testbench named GRACE. GRACE's software component includes an evolutionary algorithm that generates sized analog circuit topologies. Evolved circuit designs are tested directly in silicon: each is dynamically configured on a Field-Programmable Analog Array, then exercised with input signals while its output behavior is captured and evaluated. GRACE is extensible, and we plan to use a more complex reconfigurable circuit environment to evolve complex circuits such as an ADC.
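The generate-configure-measure loop can be sketched as follows. The FPAA is replaced here by a software stub so the example is runnable; the genome encoding, the stub's scoring rule, and all function names are hypothetical and not taken from GRACE.

```python
import random

def random_topology(rng):
    # Hypothetical genome: a list of (component, sized value) pairs.
    return [(rng.choice(["R", "C"]), rng.uniform(0.1, 10.0)) for _ in range(4)]

def evaluate_on_fpaa_stub(circuit):
    # Stand-in for configuring the FPAA and measuring the response; here we
    # simply reward circuits whose component values sum near a target.
    return -abs(sum(v for _, v in circuit) - 20.0)

rng = random.Random(0)
population = [random_topology(rng) for _ in range(20)]
for _ in range(50):
    # Rank by measured fitness, keep the best half, mutate sized values.
    population.sort(key=evaluate_on_fpaa_stub, reverse=True)
    parents = population[:10]
    population = parents + [
        [(k, v * rng.uniform(0.9, 1.1)) for k, v in rng.choice(parents)]
        for _ in range(10)
    ]
best = max(population, key=evaluate_on_fpaa_stub)
```

In the real testbench, the stub's role is played by the FPAA itself, so fitness reflects actual silicon behavior rather than a simulation.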