Robotics In Science Projects

Discover featured LinkedIn content created by experts.

  • View the profile of Alexey Navolokin

    FOLLOW ME for breaking tech news & content • helping usher in tech 2.0 • at AMD for a reason w/ purpose • LinkedIn persona •

    778,383 followers

    These students were challenged to build a robot capable of scaling a vertical wall in record time, a task that mirrors real engineering problems faced by aerospace, manufacturing, and autonomous robotics teams worldwide. Will you be able to win?
    To succeed, each group had to master a full engineering cycle:
    🔹 Mechanical design: calculating torque, motor ratios, surface grip, and center of gravity
    🔹 Material selection: optimizing weight-to-strength ratios (aluminum, carbon fiber, 3D-printed composites)
    🔹 Control algorithms: PID tuning, sensor feedback loops, and stability control
    🔹 Energy efficiency: maximizing battery output and motor load under vertical stress
    🔹 Failure analysis: testing, measuring, iterating, and rebuilding
    And this isn’t just academic. Challenges like this reflect real-world robotics breakthroughs:
    📌 NASA’s Valkyrie robot uses similar balance and grip logic for climbing unstable surfaces in disaster response missions.
    📌 Boston Dynamics spent over 10 years perfecting the control systems students experiment with on a smaller scale.
    📌 Industrial robots used in warehouses face the same physics constraints — friction, payload, torque, and trajectory planning.
    📌 Spacecraft design teams use identical modeling principles to ensure robots can maneuver on asteroids with extremely low gravity.
    And student innovation is accelerating fast:
    🚀 University robotics teams report up to 40% faster prototype cycles thanks to rapid 3D printing.
    🚀 High-school robotics programs now routinely use LIDAR, machine vision, and ROS, tools once limited to major research labs.
    🚀 Over 90% of global robotics firms hire from hands-on competition pipelines like FIRST, VEX, and Eurobot.
    🚀 The educational robotics market is growing 17% annually, driven by demand for engineers who can build, code, and troubleshoot under real conditions.
    Competitions like this create the mindset industry needs: not memorization, but building, breaking, fixing, optimizing — the same loop that drives innovation at the world’s leading tech companies.
    One student prototype at a time, the future of automation, AI, and robotics is already climbing upward. 🚀🤝
    #Engineering #Robotics #STEM #Innovation #Education #AI #Automation #FutureOfWork #NextGenTech
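    As a rough illustration of the PID tuning mentioned in the post, here is a minimal sketch of a discrete PID loop for a climbing robot's drive motor. It is purely illustrative: the gains and the read_pitch_error / set_motor_torque hooks are hypothetical placeholders, not part of the original post or any specific kit.

```python
import time

# Illustrative PID gains; real values come from tuning on the actual robot.
KP, KI, KD = 2.0, 0.1, 0.05
DT = 0.02  # 50 Hz control loop

def pid_step(error, integral, prev_error):
    """One discrete PID update: returns control output and updated state."""
    integral += error * DT
    derivative = (error - prev_error) / DT
    output = KP * error + KI * integral + KD * derivative
    return output, integral, error

def control_loop(read_pitch_error, set_motor_torque):
    """read_pitch_error / set_motor_torque are hypothetical hardware hooks."""
    integral, prev_error = 0.0, 0.0
    while True:
        error = read_pitch_error()           # e.g. deviation from the desired wall angle
        torque, integral, prev_error = pid_step(error, integral, prev_error)
        set_motor_torque(torque)             # apply the corrective torque
        time.sleep(DT)
```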

  • View the profile of Ross Dawson
    Ross Dawson is an Influencer

    Futurist | Board advisor | Global keynote speaker | Founder: AHT Group - Informivity - Bondi Innovation | Humans + AI Leader | Bestselling author | Podcaster | LinkedIn Top Voice

    35,548 followers

    Very promising! A new open-source platform for research on Human-AI teaming from Duke University uses real-time human physiological and behavioral data, such as eye gaze, EEG, and ECG, across a wide range of test situations to identify how to improve Human-AI collaboration. Selected insights from the CREW project paper (link in comments):
    💡 Comprehensive Design for Collaborative Research. CREW is built to unify multidisciplinary research across machine learning, neuroscience, and cognitive science by offering extensible environments, multimodal feedback, and seamless human-agent interactions. Its modular design allows researchers to quickly modify tasks, integrate diverse AI algorithms, and analyze human behavior through physiological data.
    🔄 Real-Time Interaction for Dynamic Decision-Making. CREW’s real-time feedback channels enable researchers to study dynamic decision-making and adaptive AI responses. Unlike traditional offline feedback systems, CREW supports continuous and instantaneous human guidance, which is crucial for simulating real-world scenarios and makes it easier to study how AI can best align with human intentions in rapidly changing environments.
    📊 Benchmarking Across Tasks and Populations. CREW enables large-scale benchmarking of human-guided reinforcement learning (RL) algorithms. By conducting 50 parallel experiments across multiple tasks, researchers could test the scalability of state-of-the-art frameworks like Deep TAMER. This ability to scale the study of how human cognitive traits interact with AI training outcomes is a first.
    🌟 Cognitive Traits Driving AI Success. The study highlighted key human cognitive traits—spatial reasoning, reflexes, and predictive abilities—as critical factors in enhancing AI performance. Overall, individuals with superior cognitive test scores consistently trained better-performing agents, underscoring the value of understanding and leveraging human strengths in collaborative AI development.
    Given that Humans + AI should be at the heart of progress, this platform promises to be a massive enabler of better Human-AI collaboration. In particular, it can help in designing human-AI interfaces that apply specific human cognitive capabilities to improve AI learning and adaptability. Love it!
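    As a rough illustration of the human-guided RL idea behind Deep TAMER-style training (this is not the CREW codebase), the sketch below regresses a scalar human feedback signal onto state-action pairs and picks the action the human is predicted to reward most. The class and function names are hypothetical.

```python
import numpy as np

class HumanRewardModel:
    """Toy linear model H(s, a) trained on real-time human feedback,
    in the spirit of TAMER-style human-guided RL (illustrative only)."""

    def __init__(self, state_dim, n_actions, lr=0.05):
        self.w = np.zeros((n_actions, state_dim))
        self.lr = lr

    def predict(self, state):
        return self.w @ state                       # estimated human reward per action

    def update(self, state, action, human_feedback):
        # Move the estimate for the taken action toward the human's scalar signal.
        error = human_feedback - self.w[action] @ state
        self.w[action] += self.lr * error * state

def act(model, state, eps=0.1):
    """Pick the action the human is predicted to approve of most, with some exploration."""
    if np.random.rand() < eps:
        return np.random.randint(model.w.shape[0])
    return int(np.argmax(model.predict(state)))
```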

  • View the profile of Cher Whee Sim

    Vice President, People Strategy, Technology & Talent Acquisition

    8,266 followers

    How do we prepare today’s young minds to solve tomorrow’s greatest challenges? Enter Empowering Young Minds – a collaborative initiative igniting curiosity in STEM and AI across Malaysia, India, and Singapore. Solve Education! Foundation and Micron Foundation are teaming up to drive tech access and prove that innovation isn’t a privilege; it’s a right. Over the next three years, 10,000+ students in these three key markets will have the opportunity to explore robotics, coding, and AI through virtual projects and in-person camps, where creativity meets technology. But here’s the bigger picture: It’s more than gadgets and equations. It’s about preparing today’s youth for a world where AI is as fundamental as reading, and where their ideas could solve the toughest global challenges. Micron’s vision? To build communities where tech bridges gaps, not widens them. Bridging divides so that a student in a rural classroom has the same potential as one in a city lab. I’m so grateful to be part of initiatives that truly believe in young people’s potential to innovate, connect, and lead. Let’s rally behind that future! #MicronFoundation #SolveEducation #Education #AI #STEM 

  • View the profile of Himanshu J.

    Building Aligned, Safe and Secure AI

    29,189 followers

    A new paper from the Technical University of Munich and Universitat Politècnica de Catalunya Barcelona explores the architecture of autonomous LLM agents, emphasizing that these systems are more than just large language models integrated into workflows. Here are the key insights:
    1. Agents ≠ Workflows. Most current systems simply chain prompts or call tools. True agents plan, perceive, remember, and act, dynamically re-planning when challenges arise.
    2. Perception. Vision-language models (VLMs) and multimodal LLMs (MM-LLMs) act as the 'eyes and ears', merging images, text, and structured data to interpret environments such as GUIs or robotics spaces.
    3. Reasoning. Techniques like Chain-of-Thought (CoT), Tree-of-Thought (ToT), ReAct, and Decompose, Plan in Parallel, and Merge (DPPM) allow agents to decompose tasks, reflect, and even engage in self-argumentation before taking action.
    4. Memory. Retrieval-Augmented Generation (RAG) supports long-term recall, while context-aware short-term memory maintains task coherence, akin to cognitive persistence, which is essential for genuine autonomy.
    5. Execution. This final step connects thought to action through multimodal control of tools, APIs, GUIs, and robotic interfaces.
    The takeaway? LLM agents represent cognitive architectures rather than mere chatbots. Each subsystem (perception, reasoning, memory, and action) must function together to achieve closed-loop autonomy. For those working in this field, the paper, titled 'Fundamentals of Building Autonomous LLM Agents', is an interesting read: https://lnkd.in/dmBaXz9u
    #AI #AgenticAI #LLMAgents #CognitiveArchitecture #GenerativeAI #ArtificialIntelligence
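    To make the perceive-reason-remember-act loop concrete, here is a minimal, hypothetical agent skeleton (it is not the paper's code); perceive, llm_plan, retrieve_memory, and execute_tool stand in for a sensor/screenshot reader, a planner LLM call, a RAG lookup, and a tool executor.

```python
from dataclasses import dataclass, field

@dataclass
class AgentState:
    goal: str
    observations: list = field(default_factory=list)   # perception outputs
    memory: list = field(default_factory=list)         # long-term notes (RAG-style)
    done: bool = False

def agent_loop(state, perceive, llm_plan, retrieve_memory, execute_tool, max_steps=10):
    """Closed-loop autonomy sketch: perceive -> recall -> plan -> act -> re-plan."""
    for _ in range(max_steps):
        obs = perceive()                                # e.g. screenshot or sensor frame
        state.observations.append(obs)
        context = retrieve_memory(state.goal, obs)      # RAG lookup over past episodes
        plan = llm_plan(goal=state.goal, obs=obs, context=context)
        if plan.get("finish"):                          # planner decides the task is done
            state.done = True
            break
        result = execute_tool(plan["tool"], plan["args"])
        state.memory.append({"plan": plan, "result": result})  # keep short-term coherence
    return state
```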

  • View the profile of Mikaël Wornoo🐺

    Studying the collision of AI & Work | Founder @ TechWolf🐺

    9,089 followers

    Why should HR leaders care about video AI breakthroughs? Because it turns out they might be the foundation for a GPT-like breakthrough in robotics, and hence for the automation of manual labor.
    Veo 3 is a model released by Google that is excellent at generating video content. However, there's something much more interesting happening behind the scenes. The model achieves 90%+ accuracy on perception tasks without specific training. Not because it generates prettier videos, but because predicting the next frame forces understanding of how objects move, fall, and interact. It essentially learned how to predict physics!
    This mirrors what happened with language models. Predicting the next word created understanding. Now predicting the next frame creates >spatial< intelligence. The breakthrough is not about the output quality at all. It's that the model had to learn physics to generate coherent sequences.
    Now consider the impact on robotics. A humanoid robot using these models doesn't need millions of training runs for each task. It already understands momentum, gravity, object permanence. The Veo 3 paper shows 62 distinct capabilities emerging from video prediction alone. A robot with this model can now pick up a jar, understand which way to twist the lid, and open it - without ever being programmed for that specific jar type. It can throw a ball to another robot and predict where to position its hands to catch it. It can even identify that a hammer should be grasped by the handle, not the head. All from visual understanding alone.
    We might have our ChatGPT moment for robotics sooner than expected...
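    As a rough sketch of why next-frame prediction pressures a model to encode dynamics (this is illustrative and not Veo 3's architecture or training code), a toy predictor only achieves low loss if its predicted frame matches how objects actually move:

```python
import torch
import torch.nn as nn

class TinyFramePredictor(nn.Module):
    """Toy model: predict frame t+1 from frame t (illustrative, not Veo 3)."""
    def __init__(self, channels=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, channels, 3, padding=1),
        )

    def forward(self, frame):
        return self.net(frame)

def train_step(model, optimizer, frame_t, frame_t1):
    """Minimizing next-frame error implicitly rewards learning the scene's dynamics."""
    pred = model(frame_t)
    loss = nn.functional.mse_loss(pred, frame_t1)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```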

  • View the profile of Daily Papers

    Machine Learning Engineer at Hugging Face

    11,836 followers

    ByteDance Seed has introduced Visual Spatial Tuning (VST), a new framework designed to empower Vision-Language Models (VLMs) with truly human-like visuospatial abilities. This research addresses a critical limitation in current VLMs: effectively understanding and reasoning about spatial relationships in visual inputs, which is essential for physically grounded AI.
    VST avoids the need for complex, specialized expert encoders, which often add overhead and can negatively impact a VLM's general capabilities. Instead, it proposes a comprehensive approach built on two innovative datasets and a progressive training pipeline. The VST-Perception (VST-P) dataset, comprising 4.1 million samples across 19 skills, significantly enhances spatial perception across single images, multi-image scenarios, and videos. Complementing this is the VST-Reasoning (VST-R) dataset, with 135K curated samples that teach models to reason in space, utilizing Chain-of-Thought (CoT) and rule-based data for reinforcement learning. The team also highlights innovative techniques like prompting with Bird's-Eye View (BEV) annotations for clearer spatial reasoning.
    The progressive training pipeline starts with supervised fine-tuning to build foundational spatial knowledge, followed by reinforcement learning to further refine reasoning abilities. The results are impressive: VST achieves state-of-the-art performance on several spatial benchmarks, including 34.8% on MMSI-Bench and 61.2% on VSIBench, all while maintaining strong general VLM capabilities.
    Perhaps most excitingly, VST provides a significant boost to Vision-Language-Action (VLA) models, making them more adept at understanding and navigating real-world environments for applications in robotics and beyond. This is a crucial step toward more intelligent, interactive AI systems.
    Explore the paper and models on Hugging Face:
    Paper: https://lnkd.in/eeBcYGZ6
    Models: https://lnkd.in/eKQSw7k4
    Project Page: https://lnkd.in/esKWZiZb
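    The two-stage recipe described above (supervised fine-tuning to build spatial knowledge, then reinforcement learning to refine reasoning) can be sketched abstractly as below; this is not the VST code, and sft_epoch / rl_epoch are hypothetical placeholders for the per-stage training routines.

```python
def train_vst_style(model, sft_data, rl_env, sft_epoch, rl_epoch,
                    sft_epochs=3, rl_epochs=2):
    """Two-stage progressive pipeline sketch (illustrative, not the VST implementation)."""
    # Stage 1: supervised fine-tuning builds foundational spatial knowledge,
    # e.g. a next-token loss over perception-style QA samples (VST-P-like data).
    for _ in range(sft_epochs):
        sft_epoch(model, sft_data)      # hypothetical per-epoch SFT routine
    # Stage 2: reinforcement learning refines spatial reasoning,
    # e.g. rule-based rewards on chain-of-thought rollouts (VST-R-like data).
    for _ in range(rl_epochs):
        rl_epoch(model, rl_env)         # hypothetical per-epoch RL routine
    return model
```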

  • View the profile of Lukas M. Ziegler

    Robotics evangelist @ planet Earth 🌍 | Telling your robot stories.

    242,246 followers

    Build your first robot in simulation! 👾
    📌 If you’re self-learning robotics, this is genuinely one of the better repos to save for later.
    NVIDIA Robotics released a "Getting Started with Isaac Sim" tutorial series covering everything from building your first robot to hardware-in-the-loop deployment. What's inside?
    → Building Your First Robot
    Explore the Isaac Sim interface, construct a simple robot model (chassis, wheels, joints), configure physics properties, implement control mechanisms using OmniGraph and ROS 2, integrate sensors (RGB cameras, 2D lidar), and stream sensor data to ROS 2 for real-time visualization in RViz.
    → Ingesting Robot Assets
    Import URDF files, prepare simulation environments, add sensors to existing robot models, and access pre-built robots to accelerate development.
    → Synthetic Data Generation
    Learn perception models for dynamic robotic tasks, understand synthetic data generation, apply domain randomization with Replicator, generate synthetic datasets, and fine-tune AI perception models with validation.
    → Software-in-the-Loop (SIL)
    Build intelligent robots, implement SIL workflows, use OmniGraph for robot control, master Isaac Sim Python scripting, deploy image segmentation with ROS 2 and Isaac ROS, and test with and without simulation.
    → Hardware-in-the-Loop (HIL)
    Understand HIL fundamentals, learn the NVIDIA Jetson platform, set up the Jetson environment, and deploy Isaac ROS on Jetson hardware.
    The progression makes sense: start with basics (build a robot), add perception (sensors and data), generate training data (synthetic generation), develop software (SIL), then deploy to hardware (HIL). Each module builds on the previous one.
    For robotics teams, this is the path to faster iteration. Simulate first, validate in software-in-the-loop, generate synthetic training data at scale, then deploy to hardware with confidence.
    🎓 If this helps at least one engineer become more fluent in the world of robotics, it means a lot to me! 🫶🏼
    Here's the course (it's free): https://lnkd.in/dRYdkmdi
    ~~
    ♻️ Join the weekly robotics newsletter, and never miss any news → ziegler.substack.com
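    As a small companion to the sensor-streaming step, here is a minimal rclpy node that subscribes to a 2D lidar topic published from simulation; the topic name '/scan' is an assumption and depends on how the OmniGraph/ROS 2 bridge is configured in the tutorial.

```python
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import LaserScan

class ScanListener(Node):
    """Minimal ROS 2 node that prints basic stats from a simulated 2D lidar."""

    def __init__(self):
        super().__init__('scan_listener')
        # '/scan' is a common default; the actual topic depends on your bridge/OmniGraph setup.
        self.create_subscription(LaserScan, '/scan', self.on_scan, 10)

    def on_scan(self, msg: LaserScan):
        valid = [r for r in msg.ranges if msg.range_min <= r <= msg.range_max]
        if valid:
            self.get_logger().info(f'{len(valid)} returns, closest {min(valid):.2f} m')

def main():
    rclpy.init()
    node = ScanListener()
    rclpy.spin(node)
    node.destroy_node()
    rclpy.shutdown()

if __name__ == '__main__':
    main()
```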

  • View the profile of Mateusz Sadowski

    Robotics Developer Relations | Founder @ Weekly Robotics

    17,116 followers

    A research team incorporated a microphone into soft robotic fingertips to detect fabrics through sound, achieving 97% classification accuracy on 20 common fabrics. The system combines internal vision with audio sensing to create multimodal touch perception. The technique is surprisingly simple but effective—the microphone picks up the acoustic signatures as the fingertip interacts with different fabric textures. Combined with visual feedback from inside the finger, it creates a classification system that matches human-level performance. The embedded video on their site is genuinely impressive—robotic ASMR meets practical manipulation. If you're working on tactile sensing or dexterous manipulation, this multimodal approach is worth exploring. Project page: https://lnkd.in/dxEkqbJ3 Paper: https://lnkd.in/dR2RXUd7 Via Weekly Robotics 349: https://lnkd.in/dhCW3WrR #Robotics #Research #TactileSensing #MachineLearning #SoftRobotics
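    For readers who want a feel for what acoustic texture classification can look like in outline (this is a generic sketch, not the authors' pipeline), the snippet below turns short contact-audio clips into MFCC features and fits a simple classifier; librosa and scikit-learn are assumed to be available.

```python
import numpy as np
import librosa
from sklearn.ensemble import RandomForestClassifier

def clip_features(waveform, sr=16000, n_mfcc=20):
    """Summarize a short rubbing/contact clip as the mean and std of its MFCCs."""
    mfcc = librosa.feature.mfcc(y=waveform, sr=sr, n_mfcc=n_mfcc)
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

def train_fabric_classifier(clips, labels, sr=16000):
    """clips: list of 1-D float audio arrays; labels: fabric names (illustrative data)."""
    X = np.stack([clip_features(c, sr) for c in clips])
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    clf.fit(X, labels)
    return clf

# Usage sketch: predicted = clf.predict([clip_features(new_clip)])
```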

  • View the profile of Charu Jain

    Executive Director at COER University | BITS Pilani | IIMC

    20,777 followers

    India is preparing for its biggest education upgrade yet — and it begins in Class 3. The Ministry of Education has officially announced that lessons on Artificial Intelligence (AI) and Computational Thinking (CT) will now be introduced into the national curriculum from Grade 3 onwards. Not as a hobby. Not as an optional module. But as a universal skill — like reading, writing, or arithmetic.
    This is not just a syllabus change. This is a mindset shift. Because the world our children are growing into demands more than memorization. It demands problem-solving, creativity, digital literacy, and the ability to think computationally.
    Children who learn the fundamentals of AI early on:
    ✅ Develop analytical and logical reasoning.
    ✅ Understand how technology really works (not just how to use it).
    ✅ Build confidence to innovate rather than consume.
    ✅ Grow up future-ready — not future-fearing.
    Just like mathematics shapes numerical thinking and language shapes communication, AI will shape how tomorrow’s generation solves problems and innovates.
    However, to teach AI effectively, we need:
    ➜ Teachers trained to understand and simplify complex concepts.
    ➜ Schools equipped with tools and hands-on learning environments.
    ➜ A curriculum that focuses on curiosity, not marks.
    India is already showing signs of readiness:
    ✔ EdTech platforms are building curriculum-aligned AI learning modules.
    ✔ Schools are adopting STEM labs, robotics clubs, and maker spaces.
    ✔ Teachers are getting upskilled through national training programmes.
    The foundation is being laid. Now we must ensure we build a learning culture that empowers students not to fear AI — but to lead it.
    Do you believe AI should be taught as early as primary school?
    — Charu Jain
    #AIinEducation #NEP2020 #FutureSkills #EdTech #ComputationalThinking #DigitalIndia #FutureReady

  • View the profile of Moumita Paul

    Robotics/AI

    4,269 followers

    What if robots could react, not just plan? A good read: https://lnkd.in/gEGSp_5U
    This paper proposes Deep Reactive Policy (DRP), a visuo-motor neural motion policy designed for generating reactive motions in diverse dynamic environments, operating directly on point cloud sensory input.
    Why does it matter? Most motion planners in robotics are either:
    Global optimizers: great at finding the perfect path, but they are way too slow and brittle in dynamic settings.
    Reactive controllers: quick on their feet, but they often get tunnel vision and crash in cluttered spaces.
    DRP claims to bridge the gap. And what makes it different?
    1. IMPACT (transformer core): pretrained on 10 million generated expert trajectories across diverse simulation scenarios.
    2. Student–teacher fine-tuning: fixes collision errors by distilling knowledge from a privileged controller (Geometric Fabrics) into a vision-based policy.
    3. DCP-RMP (reactive layer): basically a reflex system that adjusts goals on the fly when obstacles move unexpectedly.
    Results are interesting for real-world evaluation:
    Static environments: Success Rate: DRP 90% | NeuralMP 30% | cuRobo-Voxels 60%
    Goal Blocking: Success Rate: DRP 100% | NeuralMP 6.67% | cuRobo-Voxels 3.33%
    Goal Blocking: Success Rate: DRP 92.86% | NeuralMP 0% | cuRobo-Voxels 0%
    Dynamic Goal Blocking: Success Rate: DRP 93.33% | NeuralMP 0% | cuRobo-Voxels 0%
    Floating Dynamic Obstacle: Success Rate: DRP 70% | NeuralMP 0% | cuRobo-Voxels 0%
    What stands out from the results is how well DRP handles dynamic uncertainty, the very scenarios where most planners collapse. NeuralMP, which relies on test-time optimization, simply can’t keep up with real-time changes, dropping to 0 in tasks like goal blocking and dynamic obstacles. Even cuRobo, despite being state-of-the-art in static planning, struggles once goals shift or obstacles move.
    DRP’s strength seems to come from its hybrid design: the transformer policy (IMPACT) gives it global context learned from millions of trajectories, while the reactive DCP-RMP layer gives it the kind of “reflexes” you normally don’t see in learned systems. The fact that it maintains 90% success even in cluttered or obstructed real-world environments suggests it isn’t just memorizing scenarios; it has genuinely learned a transferable strategy.
    That being said, the dependence on high-quality point clouds is a bottleneck. In noisy or occluded sensing conditions, performance may degrade. Also, results are currently limited to a single robot platform (Franka Panda). So this paper is less about replacing classical planning and more about rethinking the balance between experience and reflex.
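    One rough way to picture the hybrid design is a learned policy whose commanded motion is adjusted by a reactive layer whenever obstacles get too close. The sketch below is illustrative only, not the DRP implementation; policy_net, the point-cloud frame, and the safety parameters are all hypothetical.

```python
import numpy as np

def reactive_adjust(action, points, safety_radius=0.15, gain=1.5):
    """Reflex-layer sketch: push the commanded motion away from nearby obstacle points,
    loosely in the spirit of a DCP-RMP-style reactive term (illustrative only)."""
    dists = np.linalg.norm(points, axis=1)          # points assumed in the robot's frame
    near = points[dists < safety_radius]
    if len(near) == 0:
        return action                               # nothing close: keep the learned command
    repulsion = -near.mean(axis=0)                  # direction away from the obstacle centroid
    return action + gain * repulsion / (np.linalg.norm(repulsion) + 1e-6)

def drp_style_step(policy_net, point_cloud, goal):
    """Global context from a learned policy, local safety from the reactive layer."""
    action = policy_net(point_cloud, goal)          # hypothetical transformer policy call
    return reactive_adjust(action, point_cloud)
```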
