Using Technology in Scientific Research

Discover featured content on LinkedIn created by experts.

  • View profile of Yossi Matias

    Vice President, Google. Head of Google Research.

    54,042 followers

    New research prototype for Personal Health Agent (PHA), a comprehensive research framework for delivering personalized, evidence-based health and wellness guidance. The system is built on a multi-agent framework that models support after a human expert team, with each role handled by a specialized LLM sub-agent:
    ▶️ Data Science Agent: Analyzes multi-modal data from wearables and health records, such as blood biomarkers, to provide contextualized numerical insights.
    ▶️ Domain Expert Agent: Acts as a reliable source of grounded health knowledge, tailoring information to the user's specific health profile.
    ▶️ Health Coach Agent: Supports users in goal-setting and behavioral change through multi-turn, psychologically-inspired conversations.
    The Orchestrator dynamically coordinates these specialists to synthesize a single, coherent response to complex queries. Evaluations confirmed that this collaborative multi-agent approach significantly outperformed single-agent baselines in overall response quality, clinical significance, effectiveness, and usefulness, as judged by human experts and end-users. This work, including extensive evaluation of all agentic components using data from the Wearables for Metabolic Health (WEAR-ME) study, establishes a validated blueprint for the next generation of trustworthy and coherent personal health AI.
    Read more about this research and the multi-agent framework: https://goo.gle/42kzjvZ
    Preprint: https://lnkd.in/dfZ96X5c
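The orchestrator-plus-specialists pattern described in the post can be sketched in a few lines. This is a minimal illustration, not the PHA implementation: the agent functions, routing keys, and the join step are all hypothetical stand-ins for LLM sub-agents.

```python
# Minimal sketch of an orchestrator coordinating specialized agents.
# Each "agent" here is a plain function standing in for an LLM sub-agent;
# names and routing keys are illustrative assumptions.

def data_science_agent(query: str) -> str:
    return f"[data] numerical insight for: {query}"

def domain_expert_agent(query: str) -> str:
    return f"[expert] grounded knowledge for: {query}"

def health_coach_agent(query: str) -> str:
    return f"[coach] goal-setting advice for: {query}"

AGENTS = {
    "data": data_science_agent,
    "expert": domain_expert_agent,
    "coach": health_coach_agent,
}

def orchestrate(query: str, needed: list[str]) -> str:
    """Dispatch the query to the requested specialists and merge their replies."""
    parts = [AGENTS[name](query) for name in needed]
    return " | ".join(parts)

print(orchestrate("How is my sleep affecting my glucose?", ["data", "expert"]))
```

In a real system the orchestrator would itself be a model deciding which specialists to invoke; here the caller supplies that decision explicitly.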

  • View profile of Michał Choiński

    AI Research and Voice | Driving meaningful Change | IT Lead | Digital and Agile Transformation | Speaker | Trainer | DevOps ambassador

    11,925 followers

    Research isn't just gathering facts. It's a structured, layered process: framing the right question, pulling diverse sources, and synthesizing meaningful insight. An analyst might spend hours deciding where to look, validating sources, cross-checking contradictions, and shaping a usable output. That's often many days of work for a well-rounded report.
    AI changes the mechanics. With a well-structured prompt, a language model can simulate this entire workflow in parallel:
    → Scanning dozens of sources
    → Filtering based on context and credibility
    → Surfacing inconsistencies
    → Synthesizing a clear, structured report
    The outcome? What takes a human team days can be delivered in under 30 minutes, without cutting corners.
    But let's be precise about what's happening in those 30 minutes. Behind the scenes, the model:
    → Understands the brief instantly
    → Searches and filters live data
    → Reads and cross-checks 30–50+ sources
    → Writes structured content in real time
    → Generates visuals on demand
    → Packages it all together
    What would take a team of humans hours of sequential effort, multiple roles (researcher, writer, designer, editor), and coordination and review cycles gets compressed into parallel tasks executed within seconds or minutes.
    So yes, you receive the report in 30 minutes. But what you're getting is hours of analysis, compressed, structured, and scaled. That's the value of deep research with LLMs: speed, yes, but more importantly structure, insight, and strategic value.
    🎥 In the video tutorial, we walk through a real use case: how we used ChatGPT's deep research capabilities and Gamma to build a full competitor analysis report.
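Two of the steps in that workflow, filtering by credibility and surfacing inconsistencies, are concrete enough to sketch. The field names, scores, and threshold below are illustrative assumptions, not part of any real deep-research product.

```python
# Sketch of two stages of an automated research pipeline:
# 1) drop low-credibility sources, 2) flag claims that sources disagree on.
# Source records and the credibility threshold are made-up examples.

def filter_sources(sources, min_credibility=0.7):
    """Keep only sources at or above the credibility threshold."""
    return [s for s in sources if s["credibility"] >= min_credibility]

def find_inconsistencies(sources):
    """Return claim keys where two kept sources report different values."""
    seen = {}
    flagged = []
    for s in sources:
        for claim, value in s["claims"].items():
            if claim in seen and seen[claim] != value:
                flagged.append(claim)
            seen[claim] = value
    return flagged

sources = [
    {"url": "a", "credibility": 0.9, "claims": {"market_size": "10B"}},
    {"url": "b", "credibility": 0.4, "claims": {"market_size": "2B"}},
    {"url": "c", "credibility": 0.8, "claims": {"market_size": "12B"}},
]
kept = filter_sources(sources)
print(len(kept), find_inconsistencies(kept))  # the low-credibility source is dropped
```

A real system would score credibility with a model rather than a stored number, but the control flow is the same.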

  • View profile of Ross Dawson
    Ross Dawson is an Influencer

    Futurist | Board advisor | Global keynote speaker | Founder: AHT Group - Informivity - Bondi Innovation | Humans + AI Leader | Bestselling author | Podcaster | LinkedIn Top Voice

    35,548 followers

    A nice review article, "Transforming Science with Large Language Models: A Survey on AI-assisted Scientific Discovery, Experimentation, Content Generation, and Evaluation", covers the scope of tools and approaches for how AI can support science. Some of the areas the paper covers (link in comments):
    🔎 Literature search and summarization. Traditional academic search engines rely on keyword-based retrieval, but AI-powered tools such as Elicit and SciSpace enhance search efficiency with semantic analysis, summarization, and citation graph-based recommendations. These tools help researchers sift through vast scientific literature quickly and extract key insights, reducing the time required to identify relevant studies.
    💡 Hypothesis generation and idea formation. AI models are being used to analyze scientific literature, extract key themes, and generate novel research hypotheses. Some approaches integrate structured knowledge graphs to ground hypotheses in existing scientific knowledge, reducing the risk of hallucinations. AI-generated hypotheses are evaluated for novelty, relevance, significance, and verifiability, with mixed results depending on domain expertise.
    🧪 Scientific experimentation. AI systems are increasingly used to design experiments, execute simulations, and analyze results. Multi-agent frameworks, tree search algorithms, and iterative refinement methods help automate complex workflows. Some AI tools assist in hyperparameter tuning, experiment planning, and even code execution, accelerating the research process.
    📊 Data analysis and hypothesis validation. AI-driven tools process vast datasets, identify patterns, and validate hypotheses across disciplines. Benchmarks like SciMON (NLP), TOMATO-Chem (chemistry), and LLM4BioHypoGen (medicine) provide structured datasets for AI-assisted discovery. However, issues like data biases, incomplete records, and privacy concerns remain key challenges.
    ✍️ Scientific content generation. LLMs help draft papers, generate abstracts, suggest citations, and create scientific figures. Tools like AutomaTikZ convert equations into LaTeX, while AI writing assistants improve clarity. Despite these benefits, risks of AI-generated misinformation, plagiarism, and loss of human creativity raise ethical concerns.
    📝 Peer review process. Automated review tools analyze papers, flag inconsistencies, and verify claims. AI-based meta-review generators assist in assessing manuscript quality, potentially reducing bias and improving efficiency. However, AI struggles with nuanced judgment and may reinforce biases in training data.
    ⚖️ Ethical concerns. AI-assisted scientific workflows pose risks, such as bias in hypothesis generation, lack of transparency in automated experiments, and potential reinforcement of dominant research paradigms while neglecting novel ideas. There are also concerns about overreliance on AI for critical scientific tasks, potentially compromising research integrity and human oversight.

  • View profile of Sahar Mor

    I help researchers and builders make sense of AI | ex-Stripe | aitidbits.ai | Angel Investor

    41,826 followers

    Researchers from Oxford University just achieved a 14% performance boost in mathematical reasoning by making LLMs work together like specialists in a company. In their new MALT (Multi-Agent LLM Training) paper, they introduced a novel approach where three specialized LLMs - a generator, verifier, and refinement model - collaborate to solve complex problems, similar to how a programmer, tester, and supervisor work together. The breakthrough lies in their training method: (1) Tree-based exploration - generating thousands of reasoning trajectories by having models interact (2) Credit attribution - identifying which model is responsible for successes or failures (3) Specialized training - using both correct and incorrect examples to train each model for its specific role Using this approach on 8B parameter models, MALT achieved relative improvements of 14% on the MATH dataset, 9% on CommonsenseQA, and 7% on GSM8K. This represents a significant step toward more efficient and capable AI systems, showing that well-coordinated smaller models can match the performance of much larger ones. Paper https://lnkd.in/g6ag9rP4 — Join thousands of world-class researchers and engineers from Google, Stanford, OpenAI, and Meta staying ahead on AI http://aitidbits.ai
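The generator, verifier, refiner control flow that MALT trains can be shown with a toy task. The three roles and the loop mirror the setup the post describes; the arithmetic "models" are stand-ins I made up, not anything from the paper.

```python
# Toy generator -> verifier -> refiner loop on a trivial arithmetic task
# (compute a + b * c). The flawed generator and the rule-based verifier
# are illustrative stand-ins for trained LLMs.

def generator(problem):
    # Deliberately flawed first attempt: ignores operator precedence.
    a, b, c = problem
    return (a + b) * c

def verifier(problem, answer):
    # Checks the candidate against the correct computation.
    a, b, c = problem
    return answer == a + b * c

def refiner(problem, bad_answer):
    # Produces a corrected answer when the verifier rejects.
    a, b, c = problem
    return a + b * c

def solve(problem):
    ans = generator(problem)
    if not verifier(problem, ans):
        ans = refiner(problem, ans)
    return ans

print(solve((2, 3, 4)))  # generator says 20, verifier rejects, refiner returns 14
```

MALT's contribution is training each role from trajectories of such interactions (with credit attribution), not the loop itself, which is the easy part shown here.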

  • View profile of Marija Butkovic

    Women’s health thought leader - Jury member at European Innovation Council - Founder and CEO of Women of Wearables - Consultant, entrepreneur, advisor - Ex Forbes

    37,205 followers

    The article 'Advances in biomonitoring technologies for women's health', published in Nature, addresses the long-standing bias in biomedical research and healthcare toward male populations, which has resulted in women (and transgender individuals) being underrepresented in studies, diagnostic norms, and device design. The review explores applications of wearables and biosensors across multiple domains of women's health, including fertility, pregnancy and maternal health, hormonal monitoring, vaginal infections, gynecologic and breast cancers, and osteoporosis.
    📌 For example, devices that track basal body temperature, sweat biomarkers, or hormonal shifts can help with ovulation tracking and fertility.
    📌 In pregnancy, smart textiles, abdominal sensors, and wearable ECG/uterine contraction monitors are being developed to continuously monitor maternal and fetal biomarkers.
    📌 On the diagnostic side, innovations in point-of-care assays and microfluidic devices are being adapted to detect vaginal pathogens (e.g. via pH, enzymatic markers, or nucleic acid amplification) and early signals of gynecologic cancers (liquid biopsy, micro-exosome capture, multifunctional immunosensors).
    The authors argue that this gap contributes to delays in diagnosis, suboptimal treatments, and systemic inequities in women's health. They survey emerging technologies, especially wearable sensors, point-of-care diagnostics, and AI/ML tools, that can help close that gap by enabling continuous, non-invasive biomonitoring tailored to female physiology.
    However, the authors underscore significant barriers and challenges to adoption. Many of the devices are still in prototype or small-scale testing stages and lack validation in diverse, large populations, especially in low-resource settings. Usability, user compliance, comfort, data interpretation, cost, and integration with clinical workflows are major hurdles. In addition, socioeconomic and digital divides, such as access to the internet, smartphones, and health literacy, can limit uptake among marginalized groups. The review also discusses how AI and machine learning could amplify the impact of biomonitoring by improving predictive accuracy and pattern recognition, though models must be trained on more balanced, representative datasets to avoid reinforcing bias.
    Find out more via link 🔗 https://lnkd.in/d-xh9R6m
    #femtech #womenshealth #innovation #biomonitoring #biomarkers

  • View profile of Catherine Breslin

    CTO and co-founder LichenAI | AI Scientist, Advisor & Coach | Former Amazon Alexa, Cambridge University

    6,413 followers

    Can LLMs generate expert-level research ideas? This paper compares NLP research ideas generated by LLMs and by expert researchers across a range of subtopics like multilingual NLP, mitigating hallucinations, and reducing social bias in LLM outputs.
    The authors used a RAG pipeline, where papers relevant to each subtopic were retrieved from Semantic Scholar and automatically ranked. A selection of the retrieved paper abstracts was provided to the LLM via the prompt. The LLM was then prompted to generate many ideas, and duplicate ideas were removed using semantic similarity scores. As a final step, an LLM reranked all the ideas to find the best among them. This automated reranker was built using publicly available review data.
    In parallel, expert NLP researchers were asked to propose research ideas in the same subtopics. The written format of each idea was standardised via a template, and an LLM then edited the style of the text so that differences in writing style would not affect people's judgement.
    The human- and LLM-generated ideas were manually reviewed using a review template from NLP conferences. Under blind review, LLM-generated ideas were judged as more novel but less feasible than those generated by people.
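The deduplication step in that pipeline is easy to sketch. The paper uses embedding-based semantic similarity; as a dependency-free proxy, token-level Jaccard overlap stands in below, and the threshold is an arbitrary illustrative choice.

```python
# Greedy deduplication of generated ideas: keep an idea only if it is not
# too similar to any already-kept idea. Jaccard token overlap is a toy
# stand-in for the embedding similarity used in practice.

def jaccard(a: str, b: str) -> float:
    """Token-overlap similarity in [0, 1]."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb)

def dedupe(ideas, threshold=0.8):
    kept = []
    for idea in ideas:
        if all(jaccard(idea, k) < threshold for k in kept):
            kept.append(idea)
    return kept

ideas = [
    "use retrieval to reduce hallucinations in multilingual NLP",
    "use retrieval to reduce hallucinations in multilingual NLP models",
    "train a verifier model to detect social bias",
]
print(dedupe(ideas))  # the near-duplicate second idea is dropped
```

Swapping `jaccard` for cosine similarity over sentence embeddings gives the semantic version without changing the greedy loop.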

  • View profile of João Bocas
    João Bocas is an Influencer

    Founder & CEO at B | Global Speaker

    42,555 followers

    🧬 𝗬𝗼𝘂𝗿 𝗦𝗺𝗮𝗿𝘁𝘄𝗮𝘁𝗰𝗵 𝗠𝗶𝗴𝗵𝘁 𝗞𝗻𝗼𝘄 𝗠𝗼𝗿𝗲 𝗧𝗵𝗮𝗻 𝗬𝗼𝘂 𝗧𝗵𝗶𝗻𝗸
    What if your wearable could tell you not just how many steps you’ve taken, but how fast you’re aging? A fascinating new study in Nature Communications introduces 𝗣𝗽𝗴𝗔𝗴𝗲, a “wearable-based aging clock” that uses simple PPG (photoplethysmography) signals from consumer devices like smartwatches to estimate your 𝗯𝗶𝗼𝗹𝗼𝗴𝗶𝗰𝗮𝗹 𝗮𝗴𝗲. Here’s why this is a game changer 👇
    Researchers found that this digital aging clock can predict a person’s age with remarkable accuracy, within about 2–3 years on average. But the real breakthrough lies in the “𝗮𝗴𝗲 𝗴𝗮𝗽”, the difference between your predicted (biological) age and your actual chronological age. That gap turned out to be a powerful health indicator. People with an older PpgAge gap had higher risks of 𝗵𝗲𝗮𝗿𝘁 𝗱𝗶𝘀𝗲𝗮𝘀𝗲, 𝗱𝗶𝗮𝗯𝗲𝘁𝗲𝘀, 𝗵𝗲𝗮𝗿𝘁 𝗳𝗮𝗶𝗹𝘂𝗿𝗲, 𝗮𝗻𝗱 𝗼𝘁𝗵𝗲𝗿 𝗺𝗲𝘁𝗮𝗯𝗼𝗹𝗶𝗰 𝗰𝗼𝗻𝗱𝗶𝘁𝗶𝗼𝗻𝘀. Even after accounting for traditional risk factors, the signal held up.
    It didn’t stop there: lifestyle factors also showed up clearly.
    💨 Smokers, poor sleepers, and low-activity individuals tended to have a higher (older) age gap.
    🏃♂️ Meanwhile, those who exercised regularly and slept better tended to appear biologically younger.
    Perhaps most impressively, the model was dynamic. It detected subtle physiological changes, such as during pregnancy or after cardiac events, suggesting real-time responsiveness to body changes.
    We’re still early in this space, and it’s not without limitations: self-reported data, specific populations, and no proven causality yet. But this work clearly shows how 𝗱𝗶𝗴𝗶𝘁𝗮𝗹 𝗯𝗶𝗼𝗺𝗮𝗿𝗸𝗲𝗿𝘀 from everyday wearables are becoming powerful tools in predictive health and longevity. The future of health isn’t just about diagnosis; it’s about 𝗰𝗼𝗻𝘁𝗶𝗻𝘂𝗼𝘂𝘀, 𝗿𝗲𝗮𝗹-𝘁𝗶𝗺𝗲 𝗶𝗻𝘀𝗶𝗴𝗵𝘁 into how your body is truly aging.
    🔗 Source: Nature Communications – “A wearable-based aging clock associates with disease and behavior” https://lnkd.in/eFW_739q
    #DigitalHealth #WearableTechnology #Longevity #Innovation #HealthTech #AIinHealthcare

  • View profile of Kuldeep Singh Sidhu

    Senior Data Scientist @ Walmart | BITS Pilani

    15,932 followers

    Exciting breakthrough in LLM research: A comprehensive survey reveals that Large Language Models (LLMs) are proving to be highly effective embedding models, marking a significant shift from traditional encoder-only models like BERT to decoder-only architectures. The research, led by scholars from Beihang University, University of Technology Sydney, and other prestigious institutions, demonstrates two primary approaches for deriving embeddings from LLMs:
    >> Direct Prompting Strategy
    • Leverages LLMs' instruction-following capabilities to generate topic-specific embeddings
    • Utilizes contextual representations for enhanced semantic understanding
    • Implements prompt engineering techniques for optimal embedding generation
    >> Data-Centric Tuning Approach
    • Employs supervised contrastive learning with carefully curated datasets
    • Incorporates multi-task learning frameworks for improved generalization
    • Utilizes knowledge distillation from cross-encoder models for enhanced performance
    >> Advanced Implementation Details
    The research describes sophisticated techniques including:
    • Bidirectional contextualization for enhanced semantic capture
    • Low-rank adaptation for efficient parameter tuning
    • Integration of both dense and sparse embedding approaches
    • Innovative pooling strategies for token aggregation
    >> Performance Insights
    The study demonstrates remarkable improvements over traditional models:
    • Superior performance in classification, clustering, and retrieval tasks
    • Enhanced capability in handling long-context dependencies
    • Improved cross-lingual representation capabilities
    • Better scalability with model size and training data
    This groundbreaking research opens new possibilities for applications in information retrieval, natural language processing, and recommendation systems.
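The "pooling strategies for token aggregation" mentioned above reduce a sequence of per-token hidden states to one embedding vector. A sketch of two common choices, using toy lists in place of real model outputs:

```python
# Two pooling strategies for turning per-token hidden states into a single
# embedding: mean pooling, and last-token pooling (common for decoder-only
# models, where the final token has attended to the whole sequence).
# The "hidden states" here are toy 2-dimensional vectors, not model outputs.

def mean_pool(hidden_states):
    """Average each dimension across all tokens."""
    dim = len(hidden_states[0])
    n = len(hidden_states)
    return [sum(h[d] for h in hidden_states) / n for d in range(dim)]

def last_token_pool(hidden_states):
    """Use the final token's hidden state as the sequence embedding."""
    return hidden_states[-1]

states = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]  # 3 tokens, dim 2
print(mean_pool(states), last_token_pool(states))
```

With a causal (left-to-right) attention mask, earlier tokens never see later ones, which is why last-token pooling is the natural fit for decoder-only architectures.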

  • View profile of Gustavo Monnerat

    Deputy Editor @The Lancet - Americas | PhD & MBA | Digital and Global Health | AI & Evidence Systems in Healthcare

    17,485 followers

    🚨 New AI Tool for Scientific Literature Research
    Researchers have introduced OpenScholar, an AI system designed specifically to synthesize scientific literature across millions of papers. Key points:
    → Traditional large language models often fabricate citations
    → OpenScholar retrieves real papers first, then writes answers grounded in evidence
    → It uses an iterative self-feedback loop to refine responses
    → It is built on retrieval from 45M open-access papers
    Instead of generating answers purely from model memory, this approach combines retrieval, verification, and structured synthesis, a relevant aspect for evidence-based workflows. I'm looking forward to testing this tool and comparing it with newer frontier models and LLMs connected directly to PubMed or other scientific databases.
    Ref: Asai et al. Synthesizing scientific literature with retrieval-augmented language models. Nature 2026
    👉 What do you think: will specialized retrieval systems outperform general-purpose LLMs for scientific research?
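The retrieve-first, then-refine loop described above can be sketched end to end. Everything here is a stand-in assumption, not OpenScholar's actual components: the two-paper corpus, the word-overlap retriever, the template "generator", and the crude citation check playing the role of self-feedback.

```python
# Minimal retrieve-then-generate loop with a feedback check:
# answer only from retrieved passages, and redraft until every retrieved
# paper is cited. Corpus, scoring, and feedback rule are toy assumptions.

CORPUS = {
    "paper1": "retrieval augmented generation reduces hallucinated citations",
    "paper2": "self feedback loops refine generated scientific summaries",
}

def retrieve(query: str, k: int = 2):
    """Rank corpus entries by word overlap with the query."""
    scored = sorted(
        CORPUS.items(),
        key=lambda kv: len(set(query.split()) & set(kv[1].split())),
        reverse=True,
    )
    return scored[:k]

def generate(query: str, passages) -> str:
    """Stand-in for an LLM: cite the retrieved paper ids in the answer."""
    cites = ", ".join(pid for pid, _ in passages)
    return f"Answer to '{query}' grounded in: {cites}"

def answer_with_feedback(query: str, max_rounds: int = 3) -> str:
    passages = retrieve(query)
    draft = generate(query, passages)
    for _ in range(max_rounds):
        if all(pid in draft for pid, _ in passages):  # crude feedback check
            break
        draft = generate(query, passages)
    return draft

print(answer_with_feedback("retrieval reduces hallucinated citations"))
```

The point of the structure is that the generator only ever sees retrieved passages, so citations refer to documents that actually exist in the corpus.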

  • When I talk to my colleagues and graduate students about how they are using AI tools, I realize that they are missing out on some important use cases that I've found extremely valuable. I wanted to share some of these below and look forward to hearing your thoughts on other unconventional ways you've applied these tools!
    ✅ Iterative Proposal Refinement – I used ChatGPT to evaluate a revised grant proposal in the context of reviewer comments, identifying gaps, strengthening arguments, and ensuring all weaknesses were addressed. This mimics an outside reviewer's perspective before submission.
    ✅ Logic and Flow Checks – AI can analyze argument coherence, detect missing connections, and suggest clearer phrasing in technical documents, making research papers and proposals more compelling and concise. I prompt the model to identify what information is missing to enhance understanding, or to flag areas that are unclear and need more explanation.
    ✅ Cutting the Fluff – Academics love long paragraphs, but reviewers don't. I ask the LLMs to identify areas of redundancy, or areas of varying detail between different parts of a proposal.
    ✅ Comparative Feedback Analysis – Given multiple drafts, ChatGPT can compare versions, pinpointing what improved and what still needs work, saving time on manual cross-referencing.
    ✅ Visualization Gaps & Idea Generation – Beyond writing, LLMs can help brainstorm visualization strategies, flag high-priority areas where figures would aid understanding, or suggest charts and tables to ease comprehension.
    Happy to share the prompting strategies I've been using that have been successful - please feel free to leave a comment.
    💡 How are you using LLMs in your research? Would love to hear about unconventional ways you've integrated AI tools into your academic workflow!
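For the proposal-refinement use case, one way to structure the request is to pair the revised draft with the numbered reviewer comments in a single prompt. The function name and wording below are my own illustrative sketch, not a tested recipe from the post.

```python
# Hypothetical prompt builder for reviewing a revised proposal against
# reviewer comments. Template wording is an illustrative starting point.

def build_review_prompt(draft: str, comments: list[str]) -> str:
    numbered = "\n".join(f"{i + 1}. {c}" for i, c in enumerate(comments))
    return (
        "You are an external grant reviewer. Given the reviewer comments and "
        "the revised proposal below, list any comment that is still not "
        "fully addressed, and suggest concrete fixes.\n\n"
        f"Reviewer comments:\n{numbered}\n\n"
        f"Revised proposal:\n{draft}"
    )

print(build_review_prompt(
    "We now include a power analysis.",
    ["Sample size unjustified.", "Timeline unclear."],
))
```

Numbering the comments lets the model's reply reference them unambiguously, which makes the output easy to cross-check against the original review.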
