
The Rise of Small Language Models: SLMs vs LLMs



Unlike their larger counterparts, such as GPT-4 and Llama 2, which boast billions and sometimes trillions of parameters, SLMs operate on a much smaller scale, typically encompassing millions to a few billion parameters. Mistral, as detailed on its documentation site, aims to push forward and become a leader in the open-source community; the company's work exemplifies the philosophy that advanced AI should be within reach of everyone. Currently, there are three ways to access its LLMs: through an API, cloud-based deployments, and open-source models available on Hugging Face.

Tailored for specific business domains, ranging from IT to customer support, SLMs offer targeted, actionable insights, representing a more practical approach for enterprises focused on real-world value over computational prowess. Depending on the number of concurrent users accessing an LLM, model inference tends to slow down. SLMs also hold the potential to make technology more accessible, particularly for individuals with disabilities, through features like real-time language translation and improved voice recognition. However, as the AI race has accelerated, companies have been locked in cut-throat competition over who can build the bigger language model. LLMs demand extensive computational resources, consume a considerable amount of energy, and require substantial memory capacity. If you want to keep up with the latest in language models, you won't want to miss the NLP & LLM track at ODSC East this April.

Additionally, SLMs offer the flexibility to be fine-tuned for specific languages or dialects, enhancing their effectiveness in niche applications. Microsoft, a frontrunner in this evolving landscape, is actively pursuing advancements in small language models. Its researchers have developed new training methods, exemplified by Phi-2, the latest iteration in its Small Language Model (SLM) series. With a modest 2.7 billion parameters, Phi-2 has reportedly matched or outperformed models up to 25 times its size on certain benchmarks. Microsoft's Phi-2 showcases state-of-the-art common-sense, language-understanding, and logical-reasoning capabilities, achieved through carefully curated, specialized training datasets. These efforts epitomize the evolving landscape of AI customization, where developers are empowered to create SLMs tailored to specific needs and datasets.

This constant innovation, while exciting, presents challenges in keeping up with the latest advancements and ensuring that deployed models remain state-of-the-art. Additionally, customizing and fine-tuning SLMs to specific enterprise needs can require specialized knowledge in data science and machine learning, resources that not all organizations have readily available. Even so, training, deploying, and maintaining an SLM is considerably less resource-intensive, making it a viable option for smaller enterprises or specific departments within larger organizations. This cost efficiency need not come at the expense of performance: within their domains, SLMs can rival or even surpass the capabilities of larger models.

This functionality has the potential to change how users access and interact with information, streamlining the process. They can undertake tasks such as text generation, question answering, and language translation, though they may have lower accuracy and versatility compared to larger models. These requirements can render LLMs impractical for certain applications, especially those with limited processing power or in environments where energy efficiency is a priority. In the realm of smart devices and the Internet of Things (IoT), SLMs can enhance user interaction by enabling more natural language communication with devices.

The emergence of large language models such as GPT-4 has been a transformative development in AI. These models have significantly advanced capabilities across various sectors, most notably in areas like content creation, code generation, and language translation, marking a new era in AI's practical applications. Zephyr is designed not just for efficiency and scalability but also for adaptability, allowing it to be fine-tuned for a wide array of domain-focused applications. Its presence underscores the vibrant community of developers and researchers committed to pushing the boundaries of what small, open-source language models can achieve. The realm of artificial intelligence is vast, with its capabilities stretching across numerous sectors and applications. Among these, Small Language Models (SLMs) have carved a niche, offering a blend of efficiency, versatility, and innovative integration possibilities, particularly with Emotion AI.

The broad spectrum of applications highlights the adaptability and immense potential of Small Language Models, enabling businesses to harness their capabilities across industries and diverse use cases. A notable benefit of SLMs is their capability to process data locally, making them particularly valuable for Internet of Things (IoT) edge devices and enterprises bound by stringent privacy and security regulations. On the flip side, the increased efficiency and agility of SLMs may translate to slightly reduced language processing abilities, depending on the benchmarks the model is being measured against. As businesses continue to navigate the complexities of generative AI, Small Language Models are emerging as a promising solution that balances capability with practicality. They represent a key development in AI’s evolution and offer enterprises the ability to harness the power of AI in a more controlled, efficient, and tailored manner.

The journey through the landscape of SLMs underscores a pivotal shift in the field of artificial intelligence. As we have explored, lesser-sized language models emerge as a critical innovation, addressing the need for more tailored, efficient, and sustainable AI solutions. Their ability to provide domain-specific expertise, coupled with reduced computational demands, opens up new frontiers in various industries, from healthcare and finance to transportation and customer service.


Anticipating the future landscape of AI in enterprises points towards a shift to smaller, specialized models. Many industry experts, including Sam Altman, CEO of OpenAI, predict a trend where companies recognize the practicality of smaller, more cost-effective models for most AI use cases. Altman envisions a future where the dominance of large models diminishes and a collection of smaller models surpasses them in performance. In a discussion at MIT, Altman shared insights suggesting that the reduction in model parameters could be key to achieving superior results. Cohere’s developer-friendly platform enables users to construct SLMs remarkably easily, drawing from either their proprietary training data or imported custom datasets. Offering options with as few as 1 million parameters, Cohere ensures flexibility without compromising on end-to-end privacy compliance.

This responsiveness is complemented by easier model interpretability and debugging, thanks to the simplified decision pathways and reduced parameter space inherent to SLMs. We’ve all asked ChatGPT to write a poem about lemurs or requested that Bard tell a joke about juggling. But these tools are being increasingly adopted in the workplace, where they can automate repetitive tasks and suggest solutions to thorny problems. With our society’s notable decrease in attention span, summarizing lengthy documents can be extremely useful. Its ability to accelerate text generation while maintaining simplicity is especially beneficial for users needing quick summaries or creative content on the go. SLMs also improve data security, addressing increasing concerns about data privacy and protection.

LLMs such as GPT-4 are transforming enterprises with their ability to automate complex tasks like customer service, delivering rapid and human-like responses that enhance user experiences. However, their broad training on diverse datasets from the internet can result in a lack of customization for specific enterprise needs. This generality may lead to gaps in handling industry-specific terminology and nuances, potentially decreasing the effectiveness of their responses. Another significant issue with LLMs is their propensity for hallucinations – generating outputs that seem plausible but are not actually true or factual.

Their simplified architectures enhance interpretability, and their compact size facilitates deployment on mobile devices. The ongoing refinement and innovation in Small Language Model technology will likely play a significant role in shaping the future landscape of enterprise AI solutions. One of the critical advantages of Small Language Models is their potential for enhanced security and privacy. Being smaller and more controllable, they can be deployed on-premises or in private cloud environments, reducing the risk of data leaks and ensuring that sensitive information remains within the control of the organization. This aspect makes small models particularly appealing for industries dealing with highly confidential data, such as finance and healthcare. Increasingly, the answer leans toward the precision and efficiency of Small Language Models (SLMs).

This trend is particularly evident as the industry moves away from the exclusive reliance on large language models (LLMs) towards embracing the potential of SLMs. Compared to their larger counterparts, SLMs require significantly less data to train, consume fewer computational resources, and can be deployed more swiftly. This not only reduces the environmental footprint of deploying AI but also makes cutting-edge technology accessible to smaller businesses and developers.

Another example is CodeGemma, a specialized version of Gemma focused on coding and mathematical reasoning. CodeGemma offers three different models tailored for various coding-related activities, making advanced coding tools more accessible and efficient for developers. Google’s Gemma stands out as a prime example of efficiency and versatility in the realm of small language models. The rise of small language models (SLMs) marks a significant shift towards more accessible and efficient natural language processing (NLP) tools. As AI becomes increasingly integral across various sectors, the demand for versatile, cost-effective, and less resource-intensive models grows.

Bias in the training data and algorithms can lead to unfair, inaccurate or even harmful outputs. As seen with Google Gemini, techniques to make LLMs “safe” and reliable can also reduce their effectiveness. Additionally, the centralized nature of LLMs raises concerns about the concentration of power and control in the hands of a few large tech companies. Recent performance comparisons published by Vellum and HuggingFace suggest that the performance gap between LLMs is quickly narrowing. This trend is particularly evident in specific tasks like multi-choice questions, reasoning and math problems, where the performance differences between the top models are minimal. For instance, in multi-choice questions, Claude 3 Opus, GPT-4 and Gemini Ultra all score above 83%, while in reasoning tasks, Claude 3 Opus, GPT-4, and Gemini 1.5 Pro exceed 92% accuracy.

Microsoft Phi-2

Like other SLMs, Gemma models can run on various everyday devices, like smartphones, tablets, or laptops, without needing special hardware or extensive optimization. An LLM, by contrast, is trained on larger data sources and is expected to perform relatively well across all domains, compared to a domain-specific SLM. To learn the complex relationships between words and sequential phrases, modern language models such as ChatGPT and BERT rely on so-called Transformer-based deep learning architectures. The general idea of Transformers is to convert text into numerical representations that are weighed in terms of importance when making sequence predictions.
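This importance weighting is performed by the attention mechanism. The following is a minimal, illustrative sketch of scaled dot-product attention in NumPy, not the exact implementation of any particular model; the toy embeddings are invented for demonstration:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Weigh the value vectors V by how well each query in Q matches each key in K."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                      # query-key similarity
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)       # softmax -> importance weights
    return weights @ V                                   # weighted sum of values

# Three toy "token" embeddings of dimension 4
x = np.array([[1.0, 0.0, 1.0, 0.0],
              [0.0, 2.0, 0.0, 2.0],
              [1.0, 1.0, 1.0, 1.0]])

out = scaled_dot_product_attention(x, x, x)  # self-attention: Q = K = V
print(out.shape)  # (3, 4): one context-aware vector per token
```

Each output row is a convex combination of the input rows, so every token's new representation mixes in information from the tokens it attends to most.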


Their smaller size allows for lower latency in processing requests, making them ideal for AI customer service, real-time data analysis, and other applications where speed is of the essence. Furthermore, their adaptability facilitates easier and quicker updates to model training, ensuring that the SLM remains effective over time. Advanced techniques such as model compression, knowledge distillation, and transfer learning are pivotal to optimizing Small Language Models. These methods enable SLMs to condense the broad understanding capabilities of larger models into a more focused, domain-specific toolset.
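Of these techniques, knowledge distillation is perhaps the easiest to illustrate: the student model is trained to match the teacher's temperature-softened output distribution rather than hard labels. This is a minimal sketch of the distillation loss only (the temperature value and toy logits are assumptions for illustration):

```python
import numpy as np

def softmax(logits, T=1.0):
    """Softmax with temperature T; higher T produces softer distributions."""
    z = np.asarray(logits, dtype=float) / T
    e = np.exp(z - z.max())
    return e / e.sum()

def distillation_loss(student_logits, teacher_logits, T=2.0):
    """KL divergence between softened teacher and student distributions.

    Minimizing this pushes the student to mimic the teacher's full output
    distribution, not just its top prediction. The T*T factor keeps gradient
    magnitudes comparable across temperatures.
    """
    p = softmax(teacher_logits, T)   # soft targets from the large model
    q = softmax(student_logits, T)   # small model's predictions
    return float(np.sum(p * np.log(p / q))) * T * T

teacher = [4.0, 1.0, 0.2]
student_close = [3.8, 1.1, 0.1]   # nearly mimics the teacher
student_far = [0.1, 3.9, 1.0]     # disagrees with the teacher

print(distillation_loss(student_close, teacher) < distillation_loss(student_far, teacher))  # True
```

In a full training loop this term is typically combined with the ordinary cross-entropy loss on the ground-truth labels.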

Enter the small language model (SLM), a compact and efficient alternative poised to democratize AI for diverse needs. Since the release of Gemma, the trained models have had more than 400,000 downloads last month on Hugging Face, and a few exciting projects are already emerging. For example, Cerule is a powerful image-and-language model that combines Gemma 2B with Google's SigLIP, trained on a massive dataset of images and text. Cerule leverages highly efficient data selection techniques, which suggests it can achieve high performance without requiring an extensive amount of data or computation.

Together, they can provide a more holistic understanding of user intent and emotional states, leading to applications that offer unprecedented levels of personalization and empathy. For example, an educational app could adapt its teaching methods based on the student’s mood and engagement level, detected through Emotion AI, and personalized further with content generated by an SLM. Simply put, small language models are like compact cars, while large language models are like luxury SUVs. Both have their advantages and use cases, depending on a task’s specific requirements and constraints.

This article delves into the essence of SLMs, their applications, examples, advantages over larger counterparts, and how they dovetail with Emotion AI to revolutionize user experiences. You can develop efficient and effective small language models tailored to your specific requirements by carefully considering these factors and making informed decisions during the implementation process. To start the process of running a language model on your local CPU, it’s essential to establish the right environment. This involves installing the necessary libraries and dependencies, particularly focusing on Python-based ones such as TensorFlow or PyTorch.
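As a concrete starting point, a setup like the following is typical for CPU-only experimentation. The package set and index URL are common conventions rather than requirements of any specific model; pin versions to suit your project:

```shell
# Create an isolated environment so model dependencies don't clash
python -m venv slm-env
source slm-env/bin/activate

# CPU-only PyTorch build plus the Hugging Face stack (illustrative package set)
pip install torch --index-url https://download.pytorch.org/whl/cpu
pip install transformers datasets

# Sanity check: confirm the libraries import cleanly
python -c "import torch, transformers; print(torch.__version__, transformers.__version__)"
```

TensorFlow can be substituted for PyTorch if your chosen model ships TensorFlow weights; the rest of the workflow is analogous.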

This includes ongoing monitoring, adaptation to evolving data and use cases, prompt bug fixes, and regular software updates. With our proficiency in integrating SLMs into diverse enterprise systems, we prioritize a seamless integration process to minimize disruptions. The entertainment industry is undergoing a transformative shift, with SLMs playing a central role in reshaping creative processes and enhancing user engagement.

Their application is transformative, aiding in the summarization of patient records, offering diagnostic suggestions from symptom descriptions, and staying current with medical research through summarizing new publications. Their specialized training allows for an in-depth understanding of medical context and terminology, crucial in a field where accuracy is directly linked to patient outcomes. In conclusion, while Small Language Models offer a promising alternative to the one-size-fits-all approach of Large Language Models, they come with their own set of benefits and limitations. Understanding these will be crucial for organizations looking to leverage SLMs effectively, ensuring that they can harness the potential of AI in a way that is both efficient and aligned with their specific operational needs.

In conclusion, small language models represent a compelling frontier in natural language processing (NLP), offering versatile solutions with significantly reduced computational demands. Their compact size not only makes them accessible to a broader audience, including researchers, developers, and enthusiasts, but also opens up new avenues for innovation and exploration in NLP applications. However, the efficacy of these models depends not only on their size but also on their ability to maintain performance metrics comparable to larger counterparts. The impressive power of large language models (LLMs) has evolved substantially during the last couple of years.

The company has created a platform known as Transformers, which offers a range of pre-trained SLMs and tools for fine-tuning and deploying these models. This platform serves as a hub for researchers and developers, enabling collaboration and knowledge sharing. It expedites the advancement of lesser-sized language models by providing necessary tools and resources, thereby fostering innovation in this field. In artificial intelligence, Large Language Models (LLMs) and Small Language Models (SLMs) represent two distinct approaches, each tailored to specific needs and constraints. While LLMs, exemplified by GPT-4 and similar giants, showcase the height of language processing with vast parameters, SLMs operate on a more modest scale, offering practical solutions for resource-limited environments. On the contrary, SLMs are trained on a more focused dataset, tailored to the unique needs of individual enterprises.

Developers use ChatGPT to write complete program functions, assuming they can adequately specify the requirements and constraints in the text prompt. Ada is one AI startup tackling customer experience: Ada allows customer service teams of any size to build no-code chatbots that can interact with customers on nearly any platform and in nearly any language. Meeting customers where they are, whenever they like, is a huge advantage of AI-enabled customer experience that all companies, large and small, should leverage. Ultimately, the future will be privacy-first, with data processed locally instead of being sent to an AI model provider.


Small Language Models are scaled-down versions of their larger AI model counterparts, designed to understand, generate, and interpret human language. Despite their compact size, SLMs pack a potent punch, offering impressive language processing capabilities with a fraction of the resources required by larger models. Their design focuses on achieving optimal performance in specific tasks or under constrained operational conditions, making them highly efficient and versatile.

By analyzing the student’s responses and learning pace, the SLM can adjust the difficulty level and focus areas, offering a customized learning journey. Imagine an SLM-powered educational platform that adapts its teaching strategy based on the student’s strengths and weaknesses, making learning more engaging and efficient. These models offer businesses a unique opportunity to unlock deeper insights, streamline workflows, and achieve a competitive edge. However, building and implementing an effective SLM requires expertise, resources, and a strategic approach.


Clem Delangue, CEO of the AI startup Hugging Face, suggested that up to 99% of use cases could be addressed using SLMs, and predicted 2024 will be the year of the SLM. Hugging Face, whose platform enables developers to build, train, and deploy machine learning models, announced a strategic partnership with Google earlier this year. The companies have subsequently integrated Hugging Face into Google's Vertex AI, allowing developers to quickly deploy thousands of models through the Google Vertex Model Garden. An SLM trained in-house on this knowledge and fine-tuned for internal use can serve as an intelligent agent for domain-specific use cases in highly regulated and specialized industries. The smaller model size means that users can run the model on their local machines and still generate output within an acceptable time. SLMs may lack holistic contextual information across multiple knowledge domains but are likely to excel in their chosen domain.

In conclusion, compact language models stand not just as a testament to human ingenuity in AI development but also as a beacon guiding us toward a more efficient, specialized, and sustainable future in artificial intelligence. As the AI community continues to collaborate and innovate, the future of lesser-sized language models is bright and promising. Their versatility and adaptability make them well-suited to a world where efficiency and specificity are increasingly valued. However, it’s crucial to navigate their limitations wisely, acknowledging the challenges in training, deployment, and context comprehension. Small Language Models stand at the forefront of a shift towards more efficient, accessible, and human-centric applications of AI technology.

If you’ve ever utilized Copilot to tackle intricate queries, you’ve witnessed the prowess of large language models. These models demand substantial computing resources to operate efficiently, making the emergence of small language models a significant breakthrough. Large language models’ capacity to process billions or even trillions of operations across innumerable parameters enables unmatched help for human needs, and SLMs aim to deliver much of that help at a fraction of the cost.

They understand and can generate human-like text due to the patterns and information they were trained on. With significantly fewer parameters (ranging from millions to a few billion), they require less computational power, making them ideal for deployment on mobile devices and resource-constrained environments. Their efficiency, accessibility, and customization capabilities make them a valuable tool for developers and researchers across various domains.

But despite their considerable capabilities, LLMs can nevertheless present some significant disadvantages. Their sheer size often means that they require hefty computational resources and energy to run, which can preclude them from being used by smaller organizations that might not have the deep pockets to bankroll such operations. Micro Language Models, also called Micro LLMs, serve as another practical application of Small Language Models, tailored for AI customer service. These models are fine-tuned to understand the nuances of customer interactions, product details, and company policies, thereby providing accurate and relevant responses to customer inquiries. A tailored large language model in healthcare, fine-tuned from broader base models, is specialized to process and generate information related to medical terminologies, procedures, and patient care.

LLMs vs. SLMs: The Differences in Large & Small Language Models

As the AI community continues to explore the potential of small language models, the advantages of faster development cycles, improved efficiency, and the ability to tailor models to specific needs become increasingly apparent. SLMs are poised to democratize AI access and drive innovation across industries by enabling cost-effective and targeted solutions. The deployment of SLMs at the edge opens up new possibilities for real-time, personalized, and secure applications in various sectors, such as finance, entertainment, automotive systems, education, e-commerce and healthcare. Hugging Face, along with other organizations, is playing a pivotal role in advancing the development and deployment of SLMs.


This adaptability makes them particularly appealing for companies seeking language models optimized for specialized domains or industries, where precision is needed. Some of the most illustrative demos I’ve witnessed include Google Duplex technology, where AI is able to schedule a telephone appointment in a human-like manner. This is possible thanks to the use of speech recognition, natural language understanding, and text-to-speech. Meta’s Llama 2 7B is another major player in the evolving landscape of AI, balancing the scales between performance and accessibility.

Future-proofing with small language models

This makes the training process extremely resource-intensive, and the computational power and energy consumption required to train and run LLMs are staggering. This leads to high costs, making it difficult for smaller organizations or individuals to engage in core LLM development. At an MIT event last year, OpenAI CEO Sam Altman stated the cost of training GPT-4 was at least $100M.

This local processing can further improve data security and reduce the risk of exposure during data transfer. The complexity of tools and techniques required to work with LLMs also presents a steep learning curve for developers, further limiting accessibility. There is a long cycle time for developers, from training to building and deploying models, which slows down development and experimentation. A recent paper from the University of Cambridge shows companies can spend 90 days or longer deploying a single machine learning (ML) model. Another important use case of engineering language models is to eliminate bias against unwanted language outcomes such as hate speech and discrimination.

The model’s code and checkpoints are available on GitHub, enabling the wider AI community to learn from, improve upon, and incorporate this model into their projects. The integration of SLMs with Emotion AI opens up exciting avenues for creating more intuitive and responsive applications. Emotion AI, which interprets human emotions through data inputs such as facial expressions, voice intonations, and behavioral patterns, can greatly benefit from the linguistic understanding and generation capabilities of SLMs.

Thus, while lesser-sized language models can outperform LLMs in certain scenarios, they may not always be the best choice for every application. Because they have a more focused scope and require less data, they can be fine-tuned for particular domains or tasks more easily than large, general-purpose models. This customization enables companies to create SLMs that are highly effective for their specific needs, such as sentiment analysis, named entity recognition, or domain-specific question answering. The specialized nature of SLMs can lead to improved performance and efficiency in these targeted applications compared to using a more general model. As the performance gap continues to close and more models demonstrate competitive results, it raises the question of whether LLMs are indeed starting to plateau. In IoT devices, small language models enable functions like voice recognition, natural language processing, and personalized assistance without heavy reliance on cloud services.


This setup lowers delay and reduces reliance on central servers, improving cost-efficiency and responsiveness. This makes SLMs not only quicker and cheaper to train but also more efficient to deploy, especially on smaller devices or in environments with limited computational resources. Furthermore, SLMs’ ability to be fine-tuned for specific applications allows for greater flexibility and customization, catering to the unique needs of businesses and researchers alike.

Microsoft’s Phi-3 shows the surprising power of small, locally run AI language models – Ars Technica

Posted: Tue, 23 Apr 2024 07:00:00 GMT [source]

Unlike traditional chatbots that rely on pre-defined scripts, SLM-powered bots can understand and generate human-like responses, offering a personalized and conversational experience. For instance, a retail company could implement an SLM chatbot that not only answers FAQs about products and policies but also provides styling advice based on the customer’s purchase history and preferences. From generating creative content to assisting with tasks, our models offer efficiency and innovation in a compact package. As language models evolve to become more versatile and powerful, it seems that going small may be the best way to go.

According to Microsoft, the efficiency of the transformer-based Phi-2 makes it an ideal choice for researchers who want to improve the safety, interpretability, and ethical development of AI models. With the burgeoning interest in SLMs, the market has seen an influx of various models, each claiming superiority in certain aspects. However, evaluating and selecting the appropriate Small Language Model for a specific application can be daunting. Performance metrics can be misleading, and without a deep understanding of the underlying technology, businesses may struggle to choose the most effective model for their needs. Despite the advanced capabilities of LLMs, they pose challenges including potential biases, the production of factually incorrect outputs, and significant infrastructure costs. SLMs, in contrast, are more cost-effective and easier to manage, offering benefits like lower latency and adaptability that are critical for real-time applications such as chatbots.

Looking at the market, I expect to see new, improved models this year that will speed up research and innovation. As these models continue to evolve, their potential applications in enhancing personal life are vast and ever-growing. Similarly, Google has contributed to the progress of lesser-sized language models by creating TensorFlow, a platform that provides extensive resources and tools for the development and deployment of these models. Both Hugging Face’s Transformers and Google’s TensorFlow facilitate the ongoing improvements in SLMs, thereby catalyzing their adoption and versatility in various applications. Despite these advantages, it’s essential to remember that the effectiveness of an SLM largely depends on its training and fine-tuning process, as well as the specific task it’s designed to handle.

With Cohere, developers can seamlessly navigate the complexities of SLM construction while prioritizing data privacy. In summary, the versatile applications of SLMs across these industries illustrate the immense potential for transformative impact, driving efficiency, personalization, and improved user experiences. As SLM continues to evolve, its role in shaping the future of various sectors becomes increasingly prominent. Imagine a world where intelligent assistants reside not in the cloud but on your phone, seamlessly understanding your needs and responding with lightning speed. This isn’t science fiction; it’s the promise of small language models (SLMs), a rapidly evolving field with the potential to transform how we interact with technology.

This article delves deeper into the realm of small language models, distinguishing them from their larger counterparts, LLMs, and highlighting the growing interest in them among enterprises. The article covers the advantages of SLMs, their diverse use cases, applications across industries, development methods, advanced frameworks for crafting tailored SLMs, critical implementation considerations, and more. Due to their training on smaller datasets, SLMs possess more constrained knowledge bases compared to their Large Language Model (LLM) counterparts. Additionally, their understanding of language and context tends to be more limited, potentially resulting in less accurate and nuanced responses when compared to larger models. Small language models shine in edge computing environments, where data processing occurs virtually at the data source. Deployed on edge devices such as routers, gateways, or edge servers, they can execute language-related tasks in real time.
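A key enabler of such edge deployment is post-training quantization, which shrinks a model's memory footprint by storing weights as low-precision integers. This NumPy sketch illustrates the core idea under the assumption of simple symmetric per-tensor int8 quantization (real toolchains use more sophisticated schemes):

```python
import numpy as np

def quantize_int8(w):
    """Map float weights to int8 values plus a single scale factor."""
    scale = np.abs(w).max() / 127.0                       # largest magnitude -> 127
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights at inference time."""
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.normal(size=(256, 256)).astype(np.float32)        # toy weight matrix

q, scale = quantize_int8(w)
print(q.nbytes / w.nbytes)                                # 0.25 -> 4x smaller
print(np.abs(dequantize(q, scale) - w).max() <= scale)    # True: error within one step
```

The 4x size reduction (float32 to int8) comes at the cost of a small, bounded rounding error per weight, which is usually tolerable for inference on edge hardware.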

Terms and Conditions of Sale

1 These Terms and Conditions of Sale

1.1 What these terms cover. These are the terms and conditions on which we supply products to you, whether these are goods or services on the www.kivetonpharmacy.com website.

1.2 Why you should read them. Please read these terms carefully before you submit your order to us. These terms tell you who we are, how we will provide products to you, how you and we may change or end the contract, what to do if there is a problem and other important information. If you think that there is a mistake in these terms, please contact us to discuss.

2 Information about us and how to contact us

2.1 Who we are. We are OTC Direct Services Ltd, a company registered in England and Wales, trading under the name Kiveton Deliver Pharmacy. Our company registration number is 12239643 and our registered office address is: 43 Forthill Road, Sheffield, S9 1BA, United Kingdom. Our registered VAT number is GB373807574.

2.2 How to contact us. You can contact us using the following email address pharmacy.fq716@nhs.net or by telephone using the following number 0114 698 0161.

2.3 How we may contact you. If we have to contact you we will do so by telephone or by writing to you at the email address or postal address you provided to us in your order.

3 Our contract with you

3.1 How we will accept your order. Our acceptance of your order will take place when we email you to accept it, at which point a contract will come into existence between you and us.

3.2 If we cannot accept your order. If we are unable to accept your order, we will inform you of this and will not charge you for the product. This might be because the product is out of stock, because of unexpected limits on our resources which we could not reasonably plan for, because it may not be clinically appropriate to supply the product you have ordered, because we have identified an error in the price or description of the product or because we are unable to meet a delivery deadline.

3.3 Your order number. We will assign an order number to your order and tell you what it is when we accept your order. It will help us if you can tell us the order number whenever you contact us about your order.

3.4 We only sell to the UK and EU. Our website is solely for the sale or supply of our products in the UK and EU. Unfortunately, we do not deliver to addresses outside the UK or EU.

3.5 English Language. We only provide treatment and advice in English and it will be your responsibility to ensure that you fully understand our advice.

3.6 Emergencies. You must not use our website or services for emergencies. In emergencies, you should consult your local doctor or the emergency department of your nearest hospital.

4 Our products

4.1 Product packaging may vary. The packaging of the product may vary from that shown in images on our website.

5 Your right to make changes

If you wish to make a change to the product you have ordered please contact us. We will let you know if the change is possible. If it is possible we will let you know about any changes to the price of the product, the timing of supply or anything else which would be necessary as a result of your requested change and ask you to confirm whether you wish to go ahead with the change. If we cannot make the change or the consequences of making the change are unacceptable to you, you may want to end the contract (please refer to clause 8).

6 Our right to make changes

Changes to the products and/or services. We may change the product or our services offered on the website at any time, which may include amendments required to reflect changes in relevant laws and regulatory requirements.

7 Providing the products

7.1 Delivery costs. The costs of delivery will be as displayed to you on our website.

7.2 When we will provide the products. We will deliver the products that are the subject of your order as soon as reasonably possible and in any event within 30 days after the day on which we accept your order.

7.3 We are not responsible for delays outside our control. If our supply of the products is delayed by an event outside our control then we will contact you as soon as possible to let you know and we will take steps to minimise the effect of the delay. Provided we do this we will not be liable for delays caused by the event, but if there is a risk of substantial delay you may contact us to end the contract and receive a refund for any products you have paid for but not received.

7.4 If you are not at home when the product is delivered. If no one is available at your address to take delivery, our delivery company will leave you a note informing you of how to rearrange delivery or collect the products from a local depot. We will only post the products through the letterbox without the need for a signature where you have specifically selected this option during the ordering process.

7.5 If you do not re-arrange delivery. If, after a failed delivery, you do not re-arrange delivery or collect the products from a delivery depot, we will contact you for further instructions. If, despite our reasonable efforts, we are unable to contact you or re-arrange delivery or collection, we may end the contract.

7.6 Automatic delivery upgrades. In some instances, we may have to automatically upgrade your delivery method if the shipment does not comply with the size constraints of your chosen delivery method or your chosen delivery method does not have adequate insurance. In these instances, we will not charge you any more for the upgrade.

7.7 Combination of orders. In the event that two or more orders are placed to the same address by the same account on the same day, we may combine the orders into one single delivery. This decision is ultimately at the discretion of the pharmacist on duty. You will not be charged any extra fees, nor will you be reimbursed for the cost of the delivery.

7.8 When you become responsible for the goods. The products which we deliver to you will be your responsibility from the time we deliver the product to the address you gave us.

7.9 When you own goods. You own a product which is goods once we have received payment in full.

7.10 What will happen if you do not give required information to us. We may need certain information from you so that we can supply the products to you, for example, the information you are asked to give us for the medical assessment prior to placing your order with us. If we require additional information, we will contact you to ask for this information. We will not be responsible for liability arising out of supplying the products late or not supplying any part of them if this is caused by you not giving us the information we need within a reasonable time of us asking for it. We will also not be responsible for liability arising as a result of any incorrect or misleading information you have given us.

7.11 Reasons we may suspend the supply of products to you. We may have to suspend the supply of a product after an order has been accepted by us due to a change in relevant laws and regulatory requirements or where the supply of the product would not be clinically appropriate.

7.12 Your rights if we suspend the supply of products. We will contact you in advance to tell you we will be suspending supply of the product if we have already accepted your order. You may contact us to end the contract for a product if we suspend it and we will refund any sums you have paid in advance for the products that have not been supplied to you.

7.13 No right of re-supply. You agree that you will not sell, supply or make available the products we have supplied to you to any other person.

7.14 Mental Capacity Act 2005. You confirm that consent to care and treatment from our website has not been sought in line with the Mental Capacity Act 2005.

7.15 Testing Kits. In relation to any testing kit purchased through our website, you acknowledge that neither we nor the manufacturer of the test kits or the supplier of the testing services are able to guarantee the absolute effectiveness or accuracy of the test kit. Therefore, you acknowledge and accept that there may be instances where results obtained from a test kit may be inaccurate, including the occurrence of a false positive or false negative result. Subject to the provisions in clause 12, we will not be liable for any inaccurate or other information arising from the results of a test kit, and you should seek medical advice from an appropriate healthcare professional if you think you may be suffering from a medical condition or have any specific queries on medical matters.

8 Your rights to end the contract

8.1 Ending the contract because of something we have done. You may be able to end a contract for a reason set out at (a) to (c) below. Where you decide to end the contract, the contract will end immediately and we will refund you in full for any products which have not been provided. The reasons are:

(a) we have told you about an upcoming change to the product (please refer to clause 6);

(b) we have told you about an error in the price or description of the product you have ordered and you do not wish to proceed;

(c) there is a risk that supply of the products may be significantly delayed because of events outside our control.

8.2 When you don’t have the right to change your mind. You will not be able to return any medicines which you have ordered if the return is not for any of the reasons set out in clause 8.1 above. Please note that the Consumer Contracts (Information, Cancellation and Additional Charges) Regulations 2013 do not apply to the sale of medicinal products under a prescription.

8.3 How we will refund you. If you are exercising your right to end a contract based on the reasons set out in clause 8.1, we will refund you the price you paid for any products which have not been despatched to you, by the method you used for payment.

9 Our rights to end the contract

9.1 We may end the contract if you break it. We may end the contract for a product at any time by writing to you if:

(a) you do not, within a reasonable time of us asking for it, provide us with information that is necessary for us to provide the products;

(b) you do not, within a reasonable time, allow us to deliver the products to you.

10 If there is a problem with the product

10.1 How to tell us about problems. If you have any questions or complaints about the product, please contact us.

10.2 Summary of your legal rights. We are under a legal duty to supply products that are in conformity with this contract. Nothing in these terms will affect your legal rights.

11 Price and payment

11.1 Where to find the price for the product. The price of the product (which includes VAT) will be the price indicated on the order pages when you placed your order. We take all reasonable care to ensure that the price of the product advised to you is correct. However, please see clause 11.3 for what happens if we discover an error in the price of the product you order.

11.2 We will pass on changes in the rate of VAT. If the rate of VAT changes between your order date and the date we supply the product, we will adjust the rate of VAT that you pay, unless you have already paid for the product in full before the change in the rate of VAT takes effect.

11.3 What happens if we got the price wrong. It is always possible that, despite our best efforts, some of the products we sell may be incorrectly priced. We will normally check prices before accepting your order so that, where the product’s correct price at your order date is less than our stated price at your order date, we will charge the lower amount. If the product’s correct price at your order date is higher than the price stated to you, we will contact you for your instructions before we accept your order.

11.4 When you must pay and how you must pay. We accept payment with Visa, Mastercard and American Express credit and debit cards. You must pay for the products before we dispatch them. We will not charge your credit or debit card until we dispatch the products to you.

12 Our responsibility for loss or damage suffered by you

12.1 We are responsible to you for foreseeable loss and damage caused by us. If we fail to comply with these terms, we are responsible for loss or damage you suffer that is a foreseeable result of our breaking this contract or our failing to use reasonable care and skill, but we are not responsible for any loss or damage that is not foreseeable. Loss or damage is foreseeable if either it is obvious that it will happen or if, at the time the contract was made, both we and you knew it might happen, for example, if you discussed it with us during the sales process.

12.2 We do not exclude or limit in any way our liability to you where it would be unlawful to do so. This includes liability for death or personal injury caused by our negligence or the negligence of our employees, agents or subcontractors; for fraud or fraudulent misrepresentation; for breach of your legal rights in relation to the products.

12.3 We are not liable for business losses. We only supply the products for domestic and private use. If you use the products for any commercial, business or re-sale purpose we will have no liability to you for any loss of profit, loss of business, loss of sales, loss of revenue, business interruption, loss of business opportunity or for any indirect or consequential loss or damage.

13 How we may use your personal information

13.1 How we will use your personal information. We will use the personal information you provide to us:

(a) to supply the products to you;

(b) to process your payment for the products; and

(c) in accordance with our Privacy Policy and/or any other consents for information that you have given us.

14 Other important terms

14.1 We may transfer this agreement to someone else. We may transfer our rights and obligations under these terms to another organisation.

14.2 Nobody else has any rights under this contract. This contract is between you and us. No other person shall have any rights to enforce any of its terms.

14.3 If a court finds part of this contract illegal, the rest will continue in force. Each of the paragraphs of these terms operates separately. If any court or relevant authority decides that any of them are unlawful, the remaining paragraphs will remain in full force and effect.

14.4 Even if we delay in enforcing this contract, we can still enforce it later. If we do not insist immediately that you do anything you are required to do under these terms, or if we delay in taking steps against you in respect of your breaking this contract, that will not mean that you do not have to do those things and it will not prevent us taking steps against you at a later date.

14.5 Which laws apply to this contract and where you may bring legal proceedings. These terms are governed by English law and you can bring legal proceedings in respect of the products in the English courts. If you are a consumer and not resident in the UK, you may in some circumstances be permitted to bring proceedings in the EU member state in which you reside. If you are a business customer, then you agree to the exclusive jurisdiction of the English courts.