[Site chart: Brain Mind Robotics BioComputing — Product Market / Consumers; Stage 1: The Infant Biocomputer; Stage 2: Stem Cell Grafting of Neural Networks; Apply for Employment (Management / Administration); open roles: Machine Learning Engineers, Speech Learning Engineers, Vision Learning Engineers, Robotics Movement Engineers, Stem Cell Computing]
Brain Mind Robotics
BioComputing

"Silicon Valley" (Palo Alto - San Jose) California


We are building a robotic bio-computer, based on the functional neuroanatomy of the human brain, that can speak, reason, read, understand language, experience self-consciousness and human emotions, think creatively, and physically interact with its environment.

We are hiring engineers with experience in computer science, machine learning, robotics, artificial intelligence, and the creation of auditory and visual platforms for speech, object, and face recognition.

STAGE 1: The creation of a stand-alone unit programmed to "reflexively" respond to simple visual and auditory stimuli; to "reflexively" move its "eyes," turn its "head," and open and close its "mouth" and "hands"; to make sucking, chewing, swallowing, swimming, and leg-lifting stepping movements; and to raise its "arms" and touch its "mouth" and "face."
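As an illustration only (not the company's actual architecture), the reflexive stimulus-response layer described above can be sketched as a lookup from classified stimuli to fixed motor programs. All stimulus labels and motor commands below are hypothetical placeholders:

```python
# Hypothetical sketch of a reflexive stimulus -> motor-program layer.
# Labels are illustrative; a real system would receive classified
# stimuli from vision/audition modules and drive actuators.

REFLEX_TABLE = {
    "bright_light":    ["close_eyes"],
    "loud_sound":      ["turn_head_toward_sound"],
    "touch_on_cheek":  ["turn_head", "open_mouth"],  # rooting reflex
    "object_in_mouth": ["suck", "swallow"],
    "object_in_palm":  ["close_hand"],               # grasp reflex
}

def reflex_response(stimulus: str) -> list:
    """Map a classified stimulus to its fixed motor program (empty if none)."""
    return REFLEX_TABLE.get(stimulus, [])

# Example: the grasp reflex fires when an object touches the palm.
print(reflex_response("object_in_palm"))
```

The point of the sketch is that Stage 1 behavior is a fixed mapping with no learning involved; learned responses would replace or modulate this table in later stages.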

Vision Learning and Visual Recognition Engineers--Robotics--Qualifications

Experience developing computer vision algorithms, and machine learning and deep learning algorithms.
Proficiency in image and video algorithms such as SIFT, SURF, STIP, SfM, SLAM, Multi-View Stereo (MVS).
Experience with Artificial Intelligence with an emphasis in deep machine learning such as Convolutional Neural Networks, Recurrent Neural Networks.
Proficient in mathematical and statistical optimization theory and techniques.
Programming experience in one or more of the following: C, C++, Scala, R, Python, MATLAB, Objective-C, Swift.
Ability to develop practical AI/machine-learning solutions.
Strong background in linear algebra and geometry.
Experience in graphics programming, computer vision, machine learning, deep learning, OpenCV, SLAM, image matching, feature tracking, and object classification.
Experience working with data from Kinect sensors, RGB cameras, depth cameras, and point clouds.
Experience fusing data from multiple sensors: imaging, audio and others.
Experience operating on large data sets: data collection and labeling, feature design and down-selection, training, cross-validation, algorithm assessment, indexing, search, information extraction, and performance optimization.
Ability to assist in the design of a visual system capable of detecting movement, shape, size, and recognizing objects, hands, and faces.
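As a small, self-contained illustration of the low-level vision skills listed above, the sketch below hand-rolls a 3x3 Sobel edge filter in NumPy, one of the simplest building blocks behind feature tracking and object detection. This is purely illustrative; production pipelines would use OpenCV or a deep-learning framework rather than explicit loops:

```python
import numpy as np

def sobel_edges(img):
    """Gradient magnitude via 3x3 Sobel kernels (valid convolution)."""
    kx = np.array([[-1, 0, 1],
                   [-2, 0, 2],
                   [-1, 0, 1]], dtype=float)   # horizontal gradient
    ky = kx.T                                   # vertical gradient
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            patch = img[i:i + 3, j:j + 3]
            gx = np.sum(patch * kx)
            gy = np.sum(patch * ky)
            out[i, j] = np.hypot(gx, gy)        # edge strength
    return out

# A vertical step edge produces strong responses along the boundary
# and zero response in the flat regions.
img = np.zeros((8, 8))
img[:, 4:] = 1.0
edges = sobel_edges(img)
print(edges.max())
```

The nested loops keep the arithmetic explicit; a vectorized or `cv2.Sobel`-based version would be the idiomatic choice at scale.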


*****