Yahoo India Web Search

Search results

  1. Oct 10, 2024 · Natural Language Processing. Much of the information that can help transform enterprises is locked away in text, like documents, tables, and charts. We’re building advanced AI systems that can parse vast bodies of text to help unlock that data, but also ones flexible enough to be applied to any language problem.

  2. Apr 20, 2023 · The rise of deep generative models. Generative AI refers to deep-learning models that can take raw data — say, all of Wikipedia or the collected works of Rembrandt — and “learn” to generate statistically probable outputs when prompted. At a high level, generative models encode a simplified representation of their training data and draw ...
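
As a minimal sketch of what "learning to generate statistically probable outputs" means in practice, the toy below trains a character-level bigram model on a tiny corpus and samples a continuation. Real generative models are vastly larger and deeper, but the statistical principle is the same; all names here are illustrative.

```python
# Toy sketch (not IBM's method): a character-level bigram model that
# "learns" statistics from raw text and samples probable continuations.
import random
from collections import defaultdict, Counter

def train_bigram(text):
    # count how often each character follows each other character
    counts = defaultdict(Counter)
    for a, b in zip(text, text[1:]):
        counts[a][b] += 1
    return counts

def generate(counts, prompt, length=40):
    out = list(prompt)
    for _ in range(length):
        nxt = counts.get(out[-1])
        if not nxt:
            break
        chars, weights = zip(*nxt.items())
        # sample the next character in proportion to observed frequency
        out.append(random.choices(chars, weights=weights)[0])
    return "".join(out)

corpus = "the cat sat on the mat. the dog sat on the log."
model = train_bigram(corpus)
print(generate(model, "the "))
```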

  3. Abstract. Recently, there has been a surge of interest in applying deep learning on graphs techniques (i.e., Graph Neural Networks (GNNs)) to NLP, which has achieved considerable success in many NLP tasks. Despite these successes, deep learning on graphs for NLP still faces many challenges, including automatically transforming textual data into ...
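
One of the challenges the abstract names is transforming textual data into graphs. A common, simple baseline (not necessarily the survey's method) is a word co-occurrence graph over a sliding window; the sketch below builds one with the networkx library.

```python
# Illustrative only: one simple way to turn text into a graph for GNN-style
# processing — a word co-occurrence graph over a sliding window.
import networkx as nx  # requires: pip install networkx

def text_to_graph(text, window=2):
    words = text.lower().split()
    G = nx.Graph()
    G.add_nodes_from(set(words))  # one node per unique word
    for i, w in enumerate(words):
        # connect each word to its neighbors within the window
        for j in range(i + 1, min(i + 1 + window, len(words))):
            G.add_edge(w, words[j])
    return G

G = text_to_graph("graph neural networks connect words that occur together")
print(G.number_of_nodes(), G.number_of_edges())
```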

  4. May 18, 2023 · “Integrating TensorFlow optimizations powered by Intel’s oneAPI Deep Neural Network library into the IBM Watson NLP Library for Embed led to upwards of a 165% improvement in function throughput on text and sentiment classification tasks on fourth-gen Intel Xeon Scalable Processors,” said Bill Higgins, director of development for Watson AI at IBM Research.
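
For context, stock TensorFlow exposes a switch for the Intel oneDNN optimizations mentioned in the quote. The IBM Watson NLP integration involves far more than this flag, so treat the snippet below as a minimal, related illustration only.

```python
# Enabling Intel oneDNN optimizations in stock TensorFlow. This environment
# variable is a real TensorFlow toggle; on many recent TensorFlow builds
# (2.9+ on x86 Linux) oneDNN is already enabled by default.
import os
os.environ["TF_ENABLE_ONEDNN_OPTS"] = "1"  # must be set BEFORE importing TF

import tensorflow as tf
print(tf.__version__)
```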

  5. Oct 12, 2021 · Neuro-symbolic AI. We see Neuro-symbolic AI as a pathway to achieve artificial general intelligence. By augmenting and combining the strengths of statistical AI, like machine learning, with the capabilities of human-like symbolic knowledge and reasoning, we're aiming to create a revolution in AI, rather than an evolution.
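
A toy, hypothetical illustration of the neuro-symbolic idea: a statistical scorer (here a stand-in keyword model) whose output is constrained by a hard symbolic rule. Both functions are invented for this sketch and are not IBM's system; a real pipeline would combine a trained model with a logic engine.

```python
# Toy illustration (not IBM's system): statistical scoring + symbolic rule.
def neural_score(sentence):
    # stand-in for a learned sentiment model: keyword-based "probability"
    positive = {"good", "great", "excellent"}
    hits = sum(w in positive for w in sentence.lower().split())
    return min(1.0, 0.5 + 0.25 * hits)

def symbolic_rule(sentence, score):
    # hard symbolic constraint: explicit negation flips the judgment
    if "not" in sentence.lower().split():
        return 1.0 - score
    return score

s = "the service was not good"
print(symbolic_rule(s, neural_score(s)))  # negation overrides the keyword score
```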

  6. Aug 30, 2022 · Converting several audio streams into one voice makes it easier for AI to learn. IBM researchers showed that putting the words of multiple speakers into one voice helps AI models pick up the nuances of spoken language. Their algorithm builds on a popular foundation model for speech processing, improving its accuracy and continuing IBM’s ...
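
The preprocessing idea, sketched abstractly: map every speaker's recordings into one canonical voice before training. `convert_voice` below is a hypothetical placeholder; the researchers' actual algorithm builds on a speech foundation model, and no specific library API is implied here.

```python
# Pipeline sketch of the idea described above: normalize many speakers'
# recordings into a single synthetic voice before training.
def convert_voice(waveform, target_speaker="single_canonical_voice"):
    # placeholder: a real implementation would run a voice-conversion model
    return waveform

def preprocess_corpus(utterances):
    # utterances: list of (speaker_id, waveform) pairs
    return [convert_voice(wav) for _, wav in utterances]

corpus = [("spk1", [0.0, 0.1]), ("spk2", [0.2, 0.3])]
training_data = preprocess_corpus(corpus)
print(len(training_data))
```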

  7. Aug 22, 2024 · Hoover is an AI engineer at IBM Research who co-designed the open-source and interactive Transformer Explainer with a team at Georgia Tech, where he’s also studying for a PhD in machine learning. The team’s goal was to give non-experts a hands-on introduction to what goes on under the hood of a transformer-based language model, which learns from large-scale data how to mimic human-generated text.
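
The core operation such explainers visualize is scaled dot-product attention; a minimal NumPy version (illustrative, not the Transformer Explainer's own code) looks like this:

```python
import numpy as np

def attention(Q, K, V):
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)  # similarity of each query to each key
    # softmax over keys, shifted for numerical stability
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V               # weighted mix of the value vectors

rng = np.random.default_rng(0)
Q, K, V = (rng.normal(size=(4, 8)) for _ in range(3))
print(attention(Q, K, V).shape)  # (4, 8)
```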

  8. Nov 7, 2024 · Foundation Models. Foundation models can be applied across domains and tasks. But there are challenges around scalability and around how AI is applied in specific use cases. At IBM Research, we create new foundation models for business, integrating deep domain expertise, a focus on responsible AI, and a commitment to open-source innovation.

  9. Dec 6, 2021 · Code language translation is one of many problems that we strive to address with CodeNet, which we first unveiled back in May. Essentially, CodeNet is a massive dataset that aims to help AI systems learn how to understand and improve code, as well as help developers code more efficiently, and eventually, allow an AI system to program a computer.
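
To make the code-translation use concrete: a dataset like CodeNet groups many submissions by programming problem, so aligned cross-language pairs can be mined by pairing solutions to the same problem. The record fields below are illustrative, not CodeNet's actual schema.

```python
# Sketch: mining translation pairs from problem-grouped code submissions.
from collections import defaultdict
from itertools import product

submissions = [
    {"problem": "p001", "lang": "Python", "code": "print(sum(map(int, input().split())))"},
    {"problem": "p001", "lang": "Java",   "code": "// Java solution for p001"},
    {"problem": "p002", "lang": "Python", "code": "print('hello')"},
]

# index submissions by problem, then by language
by_problem = defaultdict(lambda: defaultdict(list))
for s in submissions:
    by_problem[s["problem"]][s["lang"]].append(s["code"])

# pair every Python solution with every Java solution to the same problem
pairs = [
    (src, tgt)
    for prob in by_problem.values()
    for src, tgt in product(prob.get("Python", []), prob.get("Java", []))
]
print(len(pairs))  # candidate Python -> Java training pairs
```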

  10. Nov 10, 2021 · Today, we’re excited to announce that key areas of these NLP research efforts—including smart document understanding, advanced pattern detection, and advanced customization of NLP models—are being infused into IBM Watson Discovery, a platform applying the latest in AI and NLP to retrieve business-critical insights from documents. These new capabilities include:
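
For readers who want to try the platform, a minimal query against Watson Discovery using the ibm-watson Python SDK might look like the sketch below. The credentials, service URL, and project ID are placeholders to fill in from your own IBM Cloud instance; the version date follows the SDK's documented format.

```python
# Minimal Watson Discovery query sketch (requires: pip install ibm-watson).
from ibm_watson import DiscoveryV2
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator

authenticator = IAMAuthenticator("YOUR_API_KEY")        # placeholder
discovery = DiscoveryV2(version="2020-08-30", authenticator=authenticator)
discovery.set_service_url("YOUR_SERVICE_URL")           # placeholder

# run a natural-language query against a Discovery project
response = discovery.query(
    project_id="YOUR_PROJECT_ID",                       # placeholder
    natural_language_query="contract renewal terms",
).get_result()
print(response["matching_results"])
```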