We have developed four modular core capabilities that let our robots emulate human interaction and create unique experiences tailored to your needs.

Integration
To get the most out of our robots, we integrate them with the organization's legacy systems to leverage the information and assets already in place. Two robots may share the same hardware capabilities, but the personalization comes through this integration. Our extensive experience integrating information systems has allowed us to build adaptable connectors for the industry's main communication protocols and standards: ERP interfaces, SAP BAPI, REST APIs, JSON, Apache Kafka, GraphQL, gRPC, RMI, ODBC, and more.
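To make the connector idea concrete, here is a minimal sketch of the adapter pattern such integrations typically follow, with each legacy protocol wrapped behind one small interface. The names (Connector, RestConnector, fetch) and the REST example are illustrative assumptions, not Roomie's actual connector API.

```python
"""Illustrative connector pattern: each legacy backend is wrapped behind
one small interface so the robot consumes the same records regardless of
the source system. Names and the REST example are assumptions."""
import json
import urllib.request
from abc import ABC, abstractmethod


class Connector(ABC):
    @abstractmethod
    def fetch(self) -> list[dict]:
        """Return records from the legacy system as plain dictionaries."""


class RestConnector(Connector):
    """Pulls JSON records from a REST endpoint."""

    def __init__(self, url: str) -> None:
        self.url = url

    def fetch(self) -> list[dict]:
        with urllib.request.urlopen(self.url) as response:
            return json.load(response)


# Connectors for SAP BAPI, Apache Kafka, GraphQL, gRPC, RMI or ODBC would
# implement the same interface with their own protocol clients, so the
# robot-side code never changes.
def sync_to_robot(connector: Connector) -> list[dict]:
    return connector.fetch()
```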
Robotics Vision
Robotics vision is the result of combining robotics with computer vision. It is an essential capability for humanoid robots, allowing them to identify the environment they are in, the agents around them, and movement patterns, among other skills. The robotics vision module is fully integrated with our robots' hardware, and we use the main machine learning frameworks to train image-processing models for different types of use cases. The Roomie IT team maintains datasets of pre-labeled images covering the most common object- and person-detection scenarios in retail, healthcare, banking, insurance, and other industries.
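As an illustration of the kind of detection this module performs, the sketch below runs a pretrained object detector over a single camera frame. The choice of torchvision and its COCO-pretrained Faster R-CNN weights is an assumption made only to keep the example runnable; it does not describe Roomie's actual frameworks, models, or labeled datasets.

```python
"""Illustrative person/object detection over one RGB camera frame.

torchvision and the COCO-pretrained Faster R-CNN are stand-ins for "the
main machine learning frameworks" mentioned in the text.
"""
import torch
from torchvision.models.detection import (
    FasterRCNN_ResNet50_FPN_Weights,
    fasterrcnn_resnet50_fpn,
)

weights = FasterRCNN_ResNet50_FPN_Weights.DEFAULT
model = fasterrcnn_resnet50_fpn(weights=weights).eval()


def detect(frame: torch.Tensor, threshold: float = 0.8) -> list[str]:
    """Return labels of objects detected in a (3, H, W) float frame scaled to [0, 1]."""
    with torch.no_grad():
        output = model([frame])[0]
    categories = weights.meta["categories"]
    return [
        categories[int(label)]
        for label, score in zip(output["labels"], output["scores"])
        if float(score) >= threshold
    ]
```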
Mobility
For a robot to be considered "humanoid", one of its main components is the ability to move autonomously. Our patent-pending technology allows the robot to emulate human movement and enables richer interaction by approaching desired targets, such as customers. The mobility module is the most important of all our components. Its architecture is built around a multilayer LIDAR sensor which, integrated with depth cameras such as the Intel RealSense, enables smooth movement and avoids collisions with the static or dynamic obstacles the robot may encounter along its route.
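The toy sketch below shows the kind of collision check such an architecture implies: points from the LIDAR and the depth camera are fused, and the robot stops if anything falls inside a safety radius. The fusion scheme, coordinate frame, and threshold are assumptions for illustration, since the patent-pending navigation stack itself is not described here.

```python
"""Toy collision check fusing LIDAR and depth-camera points.

The (N, 3) point format, robot-centric frame (x forward), and 0.5 m
safety radius are illustrative assumptions only.
"""
import numpy as np

SAFETY_RADIUS_M = 0.5  # hypothetical stop distance


def obstacle_ahead(lidar_points: np.ndarray, depth_points: np.ndarray) -> bool:
    """Both inputs are (N, 3) arrays of x, y, z points in the robot frame."""
    fused = np.vstack([lidar_points, depth_points])   # naive sensor fusion
    in_front = fused[fused[:, 0] > 0.0]               # keep points ahead of the robot
    planar_dist = np.linalg.norm(in_front[:, :2], axis=1)
    return bool((planar_dist < SAFETY_RADIUS_M).any())
```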
Voice User Interface
The voice user interface module enables communication between humans and computers, allowing our robots to hold fluid conversations with end users. It relies on three cloud-native services: speech-to-text, which captures the user's voice as an audio stream and transcribes it into text in real time; natural language processing, which takes the transcribed, unstructured text, resolves the user's intent, and returns a response; and text-to-speech, which converts the NLP engine's response into audio so the robot replies to the user in its own voice.
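A minimal sketch of that three-stage pipeline is shown below. The helper functions stand in for the cloud services; their names, signatures, and canned responses are illustrative assumptions rather than the actual service calls.

```python
"""Minimal sketch of the three-stage voice pipeline described above.

speech_to_text, detect_intent, and text_to_speech are placeholders for
the cloud-native services; the canned logic keeps the example runnable.
"""


def speech_to_text(audio_stream: bytes) -> str:
    # In production this streams audio to a cloud STT service and returns
    # the real-time transcript; a placeholder transcript stands in here.
    return "where is the pharmacy"


def detect_intent(transcript: str) -> str:
    # A cloud NLP service would map the unstructured transcript to an
    # intent and a response; a lookup table stands in for it here.
    intents = {"where is the pharmacy": "The pharmacy is on the second floor."}
    return intents.get(transcript, "Sorry, could you repeat that?")


def text_to_speech(reply: str) -> bytes:
    # A cloud TTS service would synthesize audio; UTF-8 bytes stand in.
    return reply.encode("utf-8")


def handle_utterance(audio_stream: bytes) -> bytes:
    transcript = speech_to_text(audio_stream)   # 1. transcribe the user's voice
    reply_text = detect_intent(transcript)      # 2. resolve intent, build a response
    return text_to_speech(reply_text)           # 3. give the robot its voice


if __name__ == "__main__":
    print(handle_utterance(b"<audio>"))
```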

So far, we’ve integrated these components into three different models.

Social

Create a unique, personalized and consistent experience for your customers

Delivery

Make the delivery of your products cost-efficient and innovative with our distribution solutions

Security

Do you want to keep a constant eye on your business? How does an artificially intelligent guard available 24/7 sound?