Robotics Engineering Technical Skills

Discover featured content on LinkedIn created by experts.

  • Jim Fan (Influencer)

    NVIDIA Director of AI & Distinguished Scientist. Co-Lead of Project GR00T (Humanoid Robotics) & GEAR Lab. Stanford Ph.D. OpenAI's first intern. Solving Physical AGI, one motor at a time.

    237,722 followers

    Exciting updates on Project GR00T! We have discovered a systematic way to scale up robot data, tackling the most painful bottleneck in robotics. The idea is simple: a human collects demonstrations on a real robot, and we multiply that data 1000x or more in simulation. Let’s break it down:

    1. We use Apple Vision Pro (yes!!) to give the human operator first-person control of the humanoid. Vision Pro parses the human’s hand pose and retargets the motion to the robot hand, all in real time. From the human’s point of view, they are immersed in another body, like in Avatar. Teleoperation is slow and time-consuming, but we can afford to collect a small amount of data.

    2. We use RoboCasa, a generative simulation framework, to multiply the demonstration data by varying the visual appearance and layout of the environment. In Jensen’s keynote video below, the humanoid is now placing the cup in hundreds of kitchens with a huge diversity of textures, furniture, and object placement. We only have 1 physical kitchen at the GEAR Lab in NVIDIA HQ, but we can conjure up infinite ones in simulation.

    3. Finally, we apply MimicGen, a technique to multiply the above data even further by varying the *motion* of the robot. MimicGen generates a vast number of new action trajectories based on the original human data and filters out failed ones (e.g. those that drop the cup) to form a much larger dataset.

    To sum up: given 1 human trajectory with Vision Pro -> RoboCasa produces N (varying visuals) -> MimicGen further augments to NxM (varying motions). This is the way to trade compute for expensive human data via GPU-accelerated simulation. A while ago, I mentioned that teleoperation is fundamentally not scalable, because we are always limited by 24 hrs/robot/day in the world of atoms. Our new GR00T synthetic data pipeline breaks this barrier in the world of bits. Scaling has been so much fun for LLMs, and it's finally our turn to have fun in robotics!

    We are creating tools to enable everyone in the ecosystem to scale up with us:
    - RoboCasa: our generative simulation framework (Yuke Zhu). It's fully open-source! Here you go: http://robocasa.ai
    - MimicGen: our generative action framework (Ajay Mandlekar). The code is open-source for robot arms, but we will have another version for humanoids and 5-finger hands: https://lnkd.in/gsRArQXy
    - We are building a state-of-the-art Apple Vision Pro -> humanoid robot "Avatar" stack. Xiaolong Wang group’s open-source libraries laid the foundation: https://lnkd.in/gUYye7yt
    - Watch Jensen's keynote yesterday. He cannot hide his excitement about Project GR00T and robot foundation models! https://lnkd.in/g3hZteCG

    Finally, the GEAR Lab is hiring! We want the best roboticists in the world to join us on this moon-landing mission to solve physical AGI: https://lnkd.in/gTancpNK
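
    The 1 -> N -> NxM multiplication described above maps cleanly to code. Below is a purely illustrative Python sketch of that pipeline shape; every name in it (Trajectory, vary_visuals, vary_motions, simulate_succeeds) is a hypothetical stand-in, not the actual RoboCasa or MimicGen API.

    ```python
    # Illustrative sketch of the GR00T-style data multiplication pipeline:
    # 1 teleoperated demo -> N visual variants -> N x M motion variants.
    from dataclasses import dataclass, replace
    import random

    @dataclass
    class Trajectory:
        actions: list      # motor commands recorded via teleoperation
        scene_id: str      # which simulated kitchen the episode renders in
        seed: int = 0

    def vary_visuals(demo: Trajectory, n: int) -> list:
        """RoboCasa-style step: replay one demo in n generated scenes."""
        return [replace(demo, scene_id=f"kitchen_{i}") for i in range(n)]

    def simulate_succeeds(demo: Trajectory) -> bool:
        # Placeholder for a physics rollout that checks task success
        # (e.g., the cup ends up on the counter, not on the floor).
        return random.random() > 0.3

    def vary_motions(demo: Trajectory, m: int) -> list:
        """MimicGen-style step: synthesize m motion variants, keep successes."""
        variants = [replace(demo, seed=s) for s in range(m)]
        return [v for v in variants if simulate_succeeds(v)]

    human_demo = Trajectory(actions=["reach", "grasp", "place"],
                            scene_id="real_kitchen")
    dataset = [aug
               for visual in vary_visuals(human_demo, n=100)
               for aug in vary_motions(visual, m=10)]
    print(f"1 human demo -> {len(dataset)} synthetic episodes after filtering")
    ```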

  • Clem Delangue 🤗 (Influencer)

    Co-founder & CEO at Hugging Face

    301,342 followers

    🦾 Great milestone for open-source robotics: pi0 & pi0.5 by Physical Intelligence are now on Hugging Face, fully ported to PyTorch in LeRobot and validated side-by-side with OpenPI, for everyone to experiment with, fine-tune & deploy on their robots!

    π₀.₅ is a Vision-Language-Action model that represents a significant evolution from π₀, addressing a big challenge in robotics: open-world generalization. While robots can perform impressive tasks in controlled environments, π₀.₅ is designed to generalize to entirely new environments and situations that were never seen during training.

    Generalization must occur at multiple levels:
    - Physical level: understanding how to pick up a spoon (by the handle) or a plate (by the edge), even with unseen objects in cluttered environments
    - Semantic level: understanding task semantics, where to put clothes and shoes (the laundry hamper, not the bed), and what tools are appropriate for cleaning spills
    - Environmental level: adapting to "messy" real-world environments like homes, grocery stores, offices, and hospitals

    The breakthrough innovation in π₀.₅ is co-training on heterogeneous data sources. The model learns from:
    - Multimodal web data: image captioning, visual question answering, object detection
    - Verbal instructions: humans coaching robots through complex tasks step by step
    - Subtask commands: high-level semantic behavior labels (e.g., "pick up the pillow" for an unmade bed)
    - Cross-embodiment robot data: data from various robot platforms with different capabilities
    - Multi-environment data: static robots deployed across many different homes
    - Mobile manipulation data: ~400 hours of mobile robot demonstrations

    This diverse training mixture creates a "curriculum" that enables generalization across the physical, visual, and semantic levels simultaneously. Huge thanks to the Physical Intelligence team & contributors!
    Model: https://lnkd.in/eAEr7Yk6
    LeRobot: https://lnkd.in/ehzQ3Mqy
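
    To make the "experiment with, fine-tune & deploy" part concrete, here is a minimal closed-loop inference sketch for a π₀-style VLA policy. The VLAPolicy class is a hypothetical stand-in, not LeRobot's actual loading API (though its policies expose a similar observation-in, action-out contract); consult the LeRobot docs for the real entry points.

    ```python
    # Sketch of the observation -> action contract a VLA policy implements.
    import torch

    class VLAPolicy:  # hypothetical stand-in for a pretrained pi0-style policy
        def select_action(self, observation: dict) -> torch.Tensor:
            # A real policy encodes camera frames plus the language instruction
            # and decodes a chunk of continuous motor commands.
            return torch.zeros(7)  # e.g., 6-DoF end-effector delta + gripper

    policy = VLAPolicy()  # in practice: load pretrained weights from the Hub

    observation = {
        "image": torch.rand(3, 224, 224),       # RGB camera frame
        "state": torch.rand(7),                 # proprioceptive joint state
        "task": "put the spoon in the drawer",  # language instruction
    }

    for step in range(100):  # closed-loop control at the robot's rate
        action = policy.select_action(observation)
        # Send `action` to the robot, then refresh `observation` from sensors.
    ```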

  • François Candelon (Influencer)

    Partner Value Creation at Seven2

    14,574 followers

    I'm delighted to be a co-author of this research, conducted in collaboration with professors from Harvard, MIT, and Wharton, that explores what actually happens when humans and GenAI work together. As a Partner at Seven2, where I focus extensively on AI transformation, this work is at the heart of the questions we tackle daily with our portfolio companies.

    Our new study reveals three distinct types of interaction: "Cyborgs, Centaurs and Self-Automators: Human-GenAI Fused, Directed and Abdicated Knowledge Co-Creation Processes and Their Implications for Skilling"
    📄 🔗 Paper: https://lnkd.in/eHfq2yRZ
    🎥 Short video: https://lnkd.in/eDN8arH7

    Drawing on a field study of 244 global management consultants at BCG, we identify three distinct modes of human-AI interaction that unfold across real workflows:
    - Cyborgs (Fused Knowledge Co-Creation): human and GenAI continuously shape one another in a tightly fused decision process
    - Centaurs (Directed Knowledge Co-Creation): the human steers the process while leveraging AI capabilities
    - Self-Automators (Abdicated Knowledge Co-Creation): delegation of both task and decision to the AI

    We show how these modes differ in who drives the work and what skills are cultivated, with implications for:
    ✔ How professionals develop domain and AI expertise
    ✔ Organizational strategy for upskilling
    ✔ The broader future of work in GenAI-augmented environments

    Check out the short video for an overview, and dive into the full paper via the link above! Whether you're interested in AI adoption, workforce transformation, or productive human-machine collaboration, I'd love to hear your thoughts and feedback!
    📘 Full paper: https://lnkd.in/eHfq2yRZ
    🎥 Video: https://lnkd.in/eDN8arH7
    #AI #GenerativeAI #FutureOfWork #KnowledgeWork #Research #Management #Innovation

  • Samuel Oyefusi, P.E., PMP®

    Ph.D. Candidate (incoming) | MS Robotics @Wπ | ROSCon ’25 Diversity Scholar | WPI Provost Scholar | Inventor

    11,995 followers

    A few years ago, I learned the hard way that jumping straight into hardware, sensors, motors, and wiring can lead to costly mistakes and late-night headaches. That’s when I discovered the true importance of #simulation in robotics and engineering.

    During the early phase of my final-year thesis, I spent weeks recreating our school cafeteria with Iman Tokosi in Blender, exporting it as an SDF model and loading it into Gazebo using #ROS2. Suddenly, I could drive a virtual robot through aisles and around tables without the fear of damaging anything real. It was challenging and eye-opening, and it saved me countless hours and resources.

    Then came the moment that changed everything: integrating #SLAM so the robot could build its own map while moving, and setting up #Nav2 to let it plan and follow paths autonomously. Watching it navigate the environment with precision and independence was a powerful confirmation that the system worked.

    Now, imagine a world where every structure, product, and system is simulated down to the smallest detail. The result? Reduced costs, faster development, increased reliability, enhanced safety, and stronger adherence to standards. Some may still view simulation as “just for show,” but I’ve experienced firsthand that it’s the foundation of true innovation.

    Are you leveraging simulation in your next robotics or engineering project? Let’s connect and exchange ideas!
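
    For readers who want to try the same SLAM-to-autonomy step, here is a minimal sketch of sending a navigation goal in ROS 2 using Nav2's simple commander API. The package and calls (nav2_simple_commander, BasicNavigator, goToPose) ship with Nav2, but treat this as a sketch: the pose values are placeholders for your own map, and a real node would handle feedback and errors.

    ```python
    # Send one autonomous navigation goal via Nav2's simple commander.
    import rclpy
    from geometry_msgs.msg import PoseStamped
    from nav2_simple_commander.robot_navigator import BasicNavigator

    rclpy.init()
    navigator = BasicNavigator()
    navigator.waitUntilNav2Active()   # block until Nav2 lifecycle nodes are up

    goal = PoseStamped()
    goal.header.frame_id = "map"      # frame produced by the SLAM map
    goal.header.stamp = navigator.get_clock().now().to_msg()
    goal.pose.position.x = 2.0        # e.g., a table in the simulated cafeteria
    goal.pose.position.y = -1.5
    goal.pose.orientation.w = 1.0     # face forward

    navigator.goToPose(goal)          # Nav2 plans the path and follows it
    while not navigator.isTaskComplete():
        pass                          # a real node would inspect feedback here

    print(navigator.getResult())      # SUCCEEDED / CANCELED / FAILED
    rclpy.shutdown()
    ```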

  • SUKIN SHETTY

    AI Architect | AI Product Builder | AI Educator | Creator of Nemp Memory | Building GhostOps | Helping Businesses & Individuals Build Real AI Systems

    8,093 followers

    AI Swarm Intelligence: Lessons from Nature to Optimize Business Decisions

    Ever notice how birds flock in perfect sync or ants find food with uncanny efficiency? That same principle, many simple units acting together, drives AI swarm intelligence. Instead of a single, resource-heavy model, small AI agents interact locally, share findings, and converge on the best solution.

    Understanding Swarm Intelligence

    What is swarm intelligence? It is the collective behavior exhibited by decentralized, self-organized systems. Think of it as many “small brains” working together to form a super-intelligent system without any centralized control. This principle is observed in nature, in ant colonies and bird flocks.

    In AI terms: swarm intelligence leverages multiple simple, small AI agents that interact locally with one another, leading to a global problem-solving strategy. Instead of relying on one monolithic, resource-heavy model, these agents collectively explore and optimize solutions.

    Swarm Intelligence in Action

    Practical example, logistics: agents independently assess routes, share data, and collectively decide the most efficient path, adapting instantly to traffic or demand shifts. This decentralized approach can quickly adapt to traffic changes, accidents, or sudden demand spikes, much like a flock of birds adjusting its course on the fly.

    Business Optimization with Swarm Intelligence

    Supply chain management:
    - Scenario: a global retailer manages inventory across multiple warehouses.
    - Swarm approach: small AI agents monitor local inventory levels, predict demand fluctuations, and communicate with each other to optimize stock distribution.
    - Result: a highly adaptive, efficient supply chain that minimizes stockouts and reduces excess inventory.

    Adaptive and resilient: unlike traditional AI models, a swarm-based approach is inherently flexible. If one agent fails or encounters an unexpected obstacle, others seamlessly fill the gap. It’s like having a team of friends where, if one friend forgets the directions, the rest can still get you to the party on time.

    Scalability: swarm intelligence scales naturally. Whether you have 10 or 10,000 agents, the system’s performance improves as more data points contribute to the collective decision. Example: in urban planning, a swarm of sensors and agents can collaboratively monitor traffic, pollution, and energy consumption, leading to smarter, more responsive cities.

    Cost efficiency: instead of investing in one supercomputer model, businesses can deploy numerous smaller, cost-effective agents that work together, often yielding faster and more robust results.

    As we look to the future, it’s not just about creating smarter algorithms; it’s about reimagining how multiple, simple agents can collectively tackle complex challenges, much like nature has perfected over millions of years.

    What do you think? How could swarm intelligence transform your industry or business model?
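
    The mechanism the post describes, local discovery combined with sharing the best finding across the group, is exactly what classic particle swarm optimization (PSO) implements. Here is a toy, self-contained Python run; the objective function and constants are illustrative stand-ins for a real routing or inventory cost.

    ```python
    # Toy particle swarm optimization: 30 simple agents converge on a minimum
    # by mixing their own best discovery with the swarm-wide best.
    import numpy as np

    rng = np.random.default_rng(0)

    def cost(x):                        # stand-in objective to minimize
        return np.sum(x**2, axis=1)     # e.g., total route or holding cost

    n_agents, dims = 30, 2
    pos = rng.uniform(-10, 10, (n_agents, dims))
    vel = np.zeros_like(pos)
    p_best = pos.copy()                          # each agent's best position
    g_best = pos[np.argmin(cost(pos))].copy()    # swarm-wide best position

    for _ in range(200):
        r1, r2 = rng.random((2, n_agents, dims))
        vel = 0.7 * vel + 1.5 * r1 * (p_best - pos) + 1.5 * r2 * (g_best - pos)
        pos += vel
        better = cost(pos) < cost(p_best)        # local discovery...
        p_best[better] = pos[better]
        g_best = p_best[np.argmin(cost(p_best))].copy()  # ...shared globally

    print("swarm converged near:", g_best)
    ```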

  • Andriy Burkov (Influencer)

    PhD in AI, author of 📖 The Hundred-Page Language Models Book and 📖 The Hundred-Page Machine Learning Book

    486,575 followers

    VLA models are systems that combine three capabilities into one framework: seeing the world through cameras, understanding natural language instructions like "pick up the red apple," and generating the actual motor commands to make a robot do it. Before these unified models existed, robots had separate modules for vision, language, and movement that were stitched together with manual engineering, which made them brittle and unable to handle new situations.

    This review paper covers over 80 VLA models published in the past three years, organizing them into a taxonomy based on their architectures: some use a single end-to-end network, others separate high-level planning from low-level control, and some use diffusion models for smoother action sequences. The paper walks through how these models are trained using both internet data and robot demonstration datasets, then maps out where they're being applied. The later sections lay out the concrete technical problems that remain unsolved.

    Read online with an AI tutor: https://lnkd.in/eZdzYfdu
    PDF: https://lnkd.in/ezzncewE
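
    The two dominant families in that taxonomy differ mainly in where the interface between reasoning and control sits. The skeletons below are hypothetical Python types meant only to make the structural contrast concrete; they do not describe any specific model's API.

    ```python
    # End-to-end vs. hierarchical VLA architectures, as interface skeletons.
    from typing import Any, Protocol

    class EndToEndVLA(Protocol):
        def act(self, image: Any, instruction: str) -> list[float]:
            """Pixels + text in, motor commands out: one network, one call."""
            ...

    class HighLevelPlanner(Protocol):
        def plan(self, image: Any, instruction: str) -> list[str]:
            """Decomposes a task, e.g. ["move to apple", "grasp", "lift"]."""
            ...

    class LowLevelController(Protocol):
        def execute(self, image: Any, subtask: str) -> list[float]:
            """Turns one short subtask into motor commands."""
            ...

    def hierarchical_act(planner: HighLevelPlanner,
                         controller: LowLevelController,
                         image: Any, instruction: str) -> list[list[float]]:
        # Planning and control are separate models with a text interface
        # between them, so each can be trained or swapped independently.
        return [controller.execute(image, subtask)
                for subtask in planner.plan(image, instruction)]
    ```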

  • Ross Dawson (Influencer)

    Futurist | Board advisor | Global keynote speaker | Founder: AHT Group - Informivity - Bondi Innovation | Humans + AI Leader | Bestselling author | Podcaster | LinkedIn Top Voice

    35,549 followers

    All valuable work will increasingly be done by human-AI hybrids. An insightful research paper identifies both challenges and good practices from multiple case studies to propose an overall framework.

    The authors propose that generating effective human-AI hybrids is divided into two phases: Construction, in which technical implementers design the architecture of the hybrid, and Execution, where organizational implementers facilitate how participants engage and interact.

    They suggest 3 primary success factors:

    🔧 Interface and Technical Design focuses on making AI systems accessible and reliable through code-free interfaces. The technical architecture should allow rapid testing of different approaches while being supported by effective data curation strategies.

    🧠 Human Capability Development prepares people to work effectively with AI systems through training in critical assessment and prompting techniques. Employees must understand AI's capabilities and limitations, and develop skills to integrate AI into existing workflows.

    🤝 The Collaboration Framework structures successful human-AI interaction through aligned mental models and clear role definitions. It emphasizes improving underperforming areas rather than disrupting successful processes, while ensuring both human and AI agents contribute their unique strengths to achieve optimal outcomes.

  • Cam Stevens (Influencer)

    Safety Technologist & Chartered Safety Professional | AI, Critical Risk & Digital Transformation Strategist | Founder & CEO | LinkedIn Top Voice & Keynote Speaker on AI, SafetyTech, Work Design & the Future of Work

    13,243 followers

    I'm continuously fascinated by the evolving landscape of automation and robotics; it's why I work part-time as the Safety Innovation Lead at the Australian Automation and Robotics Precinct.

    With the rapid advancements in automation and robotics technology, the shift towards highly automated systems is inevitable, particularly in mining, but it also brings forth significant challenges and opportunities in managing health and safety. One of the significant challenges of safely integrating mobile machine automation into high-risk industries is the inherent limitation of relying solely on human oversight as a risk control for autonomous systems. The resulting human work carries risks of boredom, confusion, cognitive limitations, loss of situational awareness, and automation bias, which all contribute to degradation in human and organisational performance. These psychosocial risk factors highlight the urgent need for machines that can manage safety autonomously.

    At the Australian Automation & Robotics Precinct, we provide a unique sandbox for testing automation technologies. This environment allows us to push regulatory boundaries and innovate safely, ensuring that our advancements in automation are both effective and aligned with global safety standards.

    I've spent some time exploring robotics & automation in Europe over the past couple of years and will be visiting automation centres in the UK this week. Europe has consistently been at the forefront of machinery safety regulation. The recently published update to the EU Machinery Regulation 2023/1230, which becomes legally binding on January 20, 2027, is designed to ensure safe interaction between humans and machines, adapting continuously to technical developments (especially modern AI technologies). It sets a high standard that greatly influences global safety practices. Meanwhile, in Australia, while we rely on the AS/NZS 4024 series, first published in the mid-1990s, there’s a growing need to update our standards to reflect the current technological landscape.

    If you're interested in learning more about the safety of mobile autonomous systems, check out the paper titled "A comprehensive approach to safety for highly automated off-road machinery under Regulation 2023/1230" in the latest issue of Safety Science. And stay tuned for the official opening of the Australian Automation & Robotics Precinct HQ later in the year.

    #Automation #Robotics #MachineSafety #AI #SafetyInnovation #SafetyTechNews #SafetyTech

  • Dr. Elie Metri

    “Dream big, act boldly, stay resilient”, Creator of the First Saudi Made Humanoid Robots Sara & Mohamed, Building The Future.

    18,225 followers

    As the creator of the first Saudi-made humanoid robots, “SARA” & “Mohamed”, I believe the key to unlocking their full potential lies in designing them to reflect the culture, language, and customs of our region. Robots that speak our dialects, understand our traditions, and respect our values can truly resonate with people, driving adoption across industries in a way that feels natural and authentic.

    This vision goes beyond functionality. It’s about creating robots that can connect on a human level: healthcare robots offering empathetic care in Arabic, educational robots engaging students with culturally relevant examples, or even customer service robots in retail and hospitality that mirror the warmth and respect our culture values.

    To me, it’s not just about advancing technology; it’s about embedding our identity into it. By staying true to who we are, we can foster innovation while honouring the unique heritage of our region.

    How else can we bring these cultural and linguistic nuances to life in robotics? I’d love to hear your thoughts.

    #SaudiTech #vision2030 #robotics #AI

  • Antonio Grasso (Influencer)

    Technologist & Global B2B Influencer | Founder & CEO | LinkedIn Top Voice | Driven by Human-Centricity

    42,117 followers

    Adopting robotics in warehouse operations is a strategic move that boosts efficiency and safety and significantly reduces human error, transforming how modern warehouses function and setting new standards for operational excellence.

    Integrating robotics into warehouse operations involves various types of robots, including Automated Guided Vehicles (AGVs), Autonomous Mobile Robots (AMRs), robotic arms, and drones. These robots enhance efficiency by working continuously and minimizing human errors. To maximize the benefits of robotic systems, workflow analysis and careful technology selection are essential, and gradual implementation ensures smooth transitions.

    Robotic automation offers several benefits, such as increased productivity, enhanced safety, and reduced errors. However, challenges like high initial costs, maintenance, and staff training must be addressed. It's crucial to ensure that robots integrate well with existing warehouse systems and equipment, necessitating IT integration and interoperability. Continuous measurement and optimization are vital, using key performance indicators (KPIs) and robot data to refine processes. Scalability and sustainability are also important, allowing for future expansion and choosing energy-efficient solutions to minimize environmental impact.

    #warehouse #robotics
