Context: In collaboration with the BharatGPT ecosystem led by IIT Bombay, Seetha Mahalaxmi Healthcare (SML) has introduced ‘Hanooman,’ a suite of Indic large language models trained across 22 Indian languages.
The BharatGPT group, which is backed by Reliance Industries Ltd and the Department of Science and Technology, built the 'Hanooman' series of Indic language models in collaboration with SML.
Hanooman is a series of large language models (LLMs) that can respond in 11 Indian languages, such as Hindi, Tamil, and Marathi.
However, there are plans to expand coverage to more than 20 languages.
It has been designed to work in four fields: health care, governance, financial services, and education.
Notably, the series is not just a chatbot; it is a multimodal AI tool that can generate text, speech, video and more in multiple Indian languages.
One of the first customised versions is VizzhyGPT, an AI model fine-tuned for healthcare on large volumes of medical data.
These models range in size from 1.5 billion to 40 billion parameters.
GPTs are a type of large language model (LLM) that use transformer neural networks to generate human-like text.
GPTs are trained on large amounts of unlabelled text data from the internet, enabling them to understand and generate coherent and contextually relevant text.
They can be fine-tuned for specific tasks such as language generation, sentiment analysis, language modelling, machine translation, and text classification.
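As a concrete illustration, here is a minimal sketch using the open-source Hugging Face transformers library (an assumption; the article names no toolkit). It runs a small GPT-family model for text generation and an off-the-shelf model fine-tuned for sentiment analysis, two of the tasks listed above.

```python
# Sketch only: model choices are illustrative, not from the article.
from transformers import pipeline

# Text generation with GPT-2, a small GPT-family language model.
generator = pipeline("text-generation", model="gpt2")
print(generator("Large language models are", max_new_tokens=20)[0]["generated_text"])

# Sentiment analysis with the library's default fine-tuned English model.
classifier = pipeline("sentiment-analysis")
print(classifier("This explanation was clear and helpful."))
```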
GPTs use self-attention mechanisms to focus on different parts of the input text during each processing step.
This allows GPT models to capture more context and improve performance on natural language processing (NLP) tasks.
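The self-attention idea can be sketched in a few lines of NumPy. In this minimal sketch, the matrix sizes and random inputs are illustrative assumptions, not details from the article: each token's output vector is a weighted mix of every token's value vector, which is how the model captures context.

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """Single-head scaled dot-product self-attention over a token matrix X."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv            # project tokens to queries/keys/values
    scores = Q @ K.T / np.sqrt(K.shape[-1])      # similarity of every token with every other
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax: attention weights per token
    return weights @ V                           # each output mixes context from all tokens

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))                      # 4 tokens, 8-dimensional embeddings
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
print(self_attention(X, Wq, Wk, Wv).shape)       # (4, 8): one context-aware vector per token
```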
NLP is the ability of a computer program to understand human language as it is spoken and written, known as natural language.
Large language models use deep learning techniques to process vast amounts of text, learning its structure and meaning as they do so.
LLMs are trained to identify the meanings of words and the relationships between them.
Broadly, the more training data a model is fed, the better it becomes at understanding and producing text.
The training data usually comes from large datasets such as Wikipedia, OpenWebText, and the Common Crawl corpus, which supply the raw text the models use to learn to understand and generate natural language.
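To make the idea of learning word relationships from raw text concrete, here is a toy sketch: a bigram counter over a made-up corpus. Real LLMs use neural networks trained on datasets like those named above, but the underlying principle of learning what tends to follow what is the same.

```python
# Toy example: the corpus and counts are illustrative assumptions.
from collections import Counter, defaultdict

corpus = "the model reads text . the model learns patterns . text has patterns".split()
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1                      # count which word follows which

# The most likely continuation of "the" reflects a learned relationship.
print(bigrams["the"].most_common(1))             # [('model', 2)]
```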
ChatGPT is a state-of-the-art natural language processing (NLP) model developed by OpenAI.
It is a variant of the popular GPT-3 (Generative Pre-trained Transformer 3) model, which has been trained on a massive amount of text data to generate human-like responses to a given input.
The answers provided by this chatbot are intended to be clear and free of technical jargon.
It can provide responses that sound like human speech, enabling natural dialogue between the user and the virtual assistant.
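For illustration, a hedged sketch of such a dialogue through the OpenAI Python SDK; the model name, the prompts, and the assumption of an OPENAI_API_KEY in the environment are all illustrative choices, not details from the article.

```python
# Sketch only: requires the `openai` package and an OPENAI_API_KEY env variable.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
response = client.chat.completions.create(
    model="gpt-4o-mini",                         # any chat-capable model works
    messages=[
        {"role": "system", "content": "Answer clearly and without jargon."},
        {"role": "user", "content": "What is a large language model?"},
    ],
)
print(response.choices[0].message.content)       # a human-like, conversational reply
```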
By: Shubham Tiwari