Past Projects – Lean Launchpad Singapore

INFOCOMM TECHNOLOGY

  • 2013
    (1st run)

    Digital Taste

    NUS, CUTE, School of Computing

    Principal Investigator: Professor Do Yi Luen Ellen
    Entrepreneurial Lead: Nimesha Ranasinghe, PhD Student
    Team member: Kuanyi Lee

    Project description: The Digital Taste Interface simulates taste sensations through thermal and electrical stimulation of the human tongue. It has two main modules: a control system and a wearable tongue interface. The control system formulates the properties of each stimulus, and the tongue interface applies the stimuli to the user’s tongue to simulate different taste sensations.
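
    To make the two-module split concrete, here is a minimal Python sketch of how a control system might represent and formulate a stimulus as a bundle of electrical and thermal parameters. Every name and value below is a hypothetical placeholder, not taken from the actual project.

    from dataclasses import dataclass

    # Hypothetical stimulus representation: the real control system's
    # parameters and ranges are not specified in the project description.
    @dataclass
    class Stimulus:
        current_ma: float     # electrical stimulation magnitude (assumed unit)
        frequency_hz: float   # electrical pulse frequency (assumed)
        temperature_c: float  # thermal stimulation target (assumed)

    # Illustrative presets the control system might formulate before the
    # wearable tongue interface applies them; all values are invented.
    TASTE_PRESETS = {
        "sour":   Stimulus(current_ma=0.18, frequency_hz=800.0, temperature_c=25.0),
        "salty":  Stimulus(current_ma=0.04, frequency_hz=50.0,  temperature_c=25.0),
        "bitter": Stimulus(current_ma=0.08, frequency_hz=50.0,  temperature_c=35.0),
    }

    def formulate(taste: str) -> Stimulus:
        """Look up the stimulus parameters for a requested taste sensation."""
        return TASTE_PRESETS[taste]

    print(formulate("sour"))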

  • 2013
    (1st run)

    Smart Button

    NUS, HCI Lab, School of Computing

    Principal Investigator: Asst Prof Shengdong Zhao
    Entrepreneurial Lead: Chen Zhao, Research Assistant
    Team members: Leng Li Chuan Brian, Biyan Zhou, Kin Yong Leong, Yawen Li

    Project description: Single-click control of “any app / any platform”: an adaptable, simple physical control for senior citizens, paired with an intuitive software toolkit that lets a computer-literate helper link and customize functionality from computer applications or electronic devices to the physical control.
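
    A rough sketch of the toolkit idea, assuming a simple bind/click API (all names here are hypothetical, not the project’s actual interface): the helper links an application function to the button once, and a single click dispatches it.

    from typing import Callable

    class SmartButton:
        """Stand-in for the physical control; the real toolkit API is not documented here."""

        def __init__(self) -> None:
            self._action: Callable[[], None] = lambda: None  # default: do nothing

        def bind(self, action: Callable[[], None]) -> None:
            """Used by the helper's toolkit to link an app or device function to the button."""
            self._action = action

        def click(self) -> None:
            """Triggered when the senior presses the physical control."""
            self._action()

    def start_video_call() -> None:
        print("Starting a video call...")

    button = SmartButton()
    button.bind(start_video_call)  # the helper customizes the button once
    button.click()                 # one click runs the linked function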

  • 2013
    (1st run)

    Jiku Live

    NUS, NExT, School of Computing

    Principal Investigator: Assoc Prof Ooi Wei Tsang
    Entrepreneurial Leads: Nguyen Vu Thanh, Research Fellow; Daryl Chew

    Project description: The Jiku mobile app lets multiple users browse, watch, zoom, pan, record, and share video streams, with automated tracking of objects; it works with stored videos as well as live streams. The app also monitors what other users like or dislike about each video clip and recommends clips the user may enjoy through a collaborative filtering algorithm. The Jiku Director system automatically analyzes the input video streams of an event and generates a new video that switches between the different input streams, producing a “directed” video of the event based on the interestingness of the content and the quality of the input streams.
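
    As a hedged sketch of the Director’s switching step (the actual scoring features and weights are not given in the description), one plausible reading is: score each input stream on interestingness and quality for each segment, then cut to the highest-scoring stream.

    from dataclasses import dataclass

    @dataclass
    class StreamScore:
        stream_id: str
        interestingness: float  # content saliency for this segment (assumed feature)
        quality: float          # capture quality, e.g. stability (assumed feature)

    def pick_stream(scores, w_interest=0.7, w_quality=0.3):
        """Cut to the stream with the best weighted score; weights are invented."""
        return max(scores, key=lambda s: w_interest * s.interestingness
                                         + w_quality * s.quality).stream_id

    segment = [
        StreamScore("phone_A", interestingness=0.9, quality=0.4),
        StreamScore("phone_B", interestingness=0.6, quality=0.9),
    ]
    print(pick_stream(segment))  # -> phone_A with these weights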