
{"id":2125,"date":"2024-05-02T17:35:34","date_gmt":"2024-05-02T09:35:34","guid":{"rendered":"https:\/\/infernews.com\/?p=2125"},"modified":"2024-05-05T02:25:14","modified_gmt":"2024-05-04T18:25:14","slug":"this-is-how-i-run-my-own-custom-models-using-ollama-youtube","status":"publish","type":"post","link":"https:\/\/infernews.com\/blog\/this-is-how-i-run-my-own-custom-models-using-ollama-youtube\/","title":{"rendered":"This is how I run my OWN Custom-Models using OLLAMA &#8211; YouTube"},"content":{"rendered":"<figure class=\"wp-block-embed-youtube wp-block-embed is-type-video is-provider-youtube wp-embed-aspect-16-9 wp-has-aspect-ratio\"><div class=\"lyte-wrapper\" title=\"This is how I run my OWN Custom-Models using OLLAMA\" style=\"width:853px;max-width:100%;margin:5px auto;\"><div class=\"lyMe\" id=\"WYL_lWJ5eY8C3IU\" itemprop=\"video\" itemscope itemtype=\"https:\/\/schema.org\/VideoObject\"><div><meta itemprop=\"thumbnailUrl\" content=\"https:\/\/infernews.com\/blog\/wp-content\/plugins\/wp-youtube-lyte\/lyteCache.php?origThumbUrl=https%3A%2F%2Fi.ytimg.com%2Fvi%2FlWJ5eY8C3IU%2Fhqdefault.jpg\" \/><meta itemprop=\"embedURL\" content=\"https:\/\/www.youtube.com\/embed\/lWJ5eY8C3IU\" \/><meta itemprop=\"duration\" content=\"PT22M25S\" \/><meta itemprop=\"uploadDate\" content=\"2024-03-03T18:15:07Z\" \/><\/div><div id=\"lyte_lWJ5eY8C3IU\" data-src=\"https:\/\/infernews.com\/blog\/wp-content\/plugins\/wp-youtube-lyte\/lyteCache.php?origThumbUrl=https%3A%2F%2Fi.ytimg.com%2Fvi%2FlWJ5eY8C3IU%2Fhqdefault.jpg\" class=\"pL\"><div class=\"tC\"><div class=\"tT\" itemprop=\"name\">This is how I run my OWN Custom-Models using OLLAMA<\/div><\/div><div class=\"play\"><\/div><div class=\"ctrl\"><div class=\"Lctrl\"><\/div><div class=\"Rctrl\"><\/div><\/div><\/div><noscript><a href=\"https:\/\/youtu.be\/lWJ5eY8C3IU\" rel=\"nofollow\"><img loading=\"lazy\" decoding=\"async\" 
src=\"https:\/\/infernews.com\/blog\/wp-content\/plugins\/wp-youtube-lyte\/lyteCache.php?origThumbUrl=https%3A%2F%2Fi.ytimg.com%2Fvi%2FlWJ5eY8C3IU%2F0.jpg\" alt=\"This is how I run my OWN Custom-Models using OLLAMA\" width=\"853\" height=\"460\" \/><br \/>Watch this video on YouTube<\/a><\/noscript><meta itemprop=\"description\" content=\"In this video, we are going to push our own models on Ollama. Specifically, you will learn how to Run ollama models, how to run models not available in model library, host your own models in model library, all in Cloud GPU service. Run custom model on ollama Import hugging face models on ollama Access Codes here: https:\/\/github.com\/PromptEngineer48\/Ollama_custom Websites: https:\/\/ollama.com\/ Let\u2019s do this! Join the AI Revolution! #custom_models #ollama #milestone #AGI #openai #autogen #windows #ollama #ai #llm_selector #auto_llm_selector #localllms #github #streamlit #langchain #qstar #openai #ollama #webui #github #python #llm #largelanguagemodels CHANNEL LINKS: \ud83d\udd75\ufe0f\u200d\u2640\ufe0f Join my Patreon: https:\/\/www.patreon.com\/PromptEngineer975 \u2615 Buy me a coffee: https:\/\/ko-fi.com\/promptengineer \ud83d\udcde Get on a Call with me - Calendly: https:\/\/calendly.com\/prompt-engineer48\/call \u2764\ufe0f Subscribe: https:\/\/www.youtube.com\/@PromptEngineer48 \ud83d\udc80 GitHub Profile: https:\/\/github.com\/PromptEngineer48 \ud83d\udd16 Twitter Profile: https:\/\/twitter.com\/prompt48 TIME STAMPS: 0:00 - Intro 0:58 - Objectives 2:29 - Usefullness 2:48 - What is Ollama? 3:30 - RunPod Intro and Use 4:36 - How to use Ollama? 
6:38 - Connect to Jupyter Notebooks in Ollama 7:10 - Log in to Ollama Account 8:21 - Main Section for Code 9:23 - Download models from Huggingface 11:52 - Create a Modelfile in Ollama 14:36 - Creating Custom Ollama Model 17:04 - Ollama Keys 18:22 - Pushing the Models 20:12 - Testing on Local Systems 21:00 - Conclusion \ud83c\udf81Subscribe to my channel: https:\/\/www.youtube.com\/@PromptEngineer48 If you have any questions, comments or suggestions, feel free to comment below. \ud83d\udd14 Don&#039;t forget to hit the bell icon to stay updated on our latest innovations and exciting developments in the world of AI!\"><\/div><\/div><div class=\"lL\" style=\"max-width:100%;width:853px;margin:5px auto;\"><\/div><figcaption><\/figcaption><\/figure>\n<p>\u00a0<\/p>\n<p>In this video, we are going to push our own models on Ollama. Specifically, you will learn how to Run ollama models, how to run models not available in model&#8230;<\/p>\n","protected":false},"excerpt":{"rendered":"<p>\u00a0 In this video, we are going to push our own models on Ollama. 
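The middle sections of the video (9:23–14:36) cover downloading model weights from Hugging Face and wrapping them in an Ollama Modelfile. A minimal sketch of that workflow, assuming a hypothetical choice of model (the repo, file name, and `my-custom-model` here are placeholders, not names from the video):

```shell
# Download a GGUF weights file from Hugging Face.
# (Repo and file names are illustrative; substitute the model you want.)
huggingface-cli download TheBloke/Mistral-7B-Instruct-v0.2-GGUF \
  mistral-7b-instruct-v0.2.Q4_K_M.gguf --local-dir ./models

# Write a Modelfile pointing Ollama at the downloaded weights,
# with a sampling parameter and a system prompt.
cat > Modelfile <<'EOF'
FROM ./models/mistral-7b-instruct-v0.2.Q4_K_M.gguf
PARAMETER temperature 0.7
SYSTEM "You are a helpful assistant."
EOF

# Build a local Ollama model from the Modelfile, then try it.
ollama create my-custom-model -f Modelfile
ollama run my-custom-model "Hello!"
```

`FROM`, `PARAMETER`, and `SYSTEM` are standard Modelfile directives; the commands assume the `ollama` and `huggingface-cli` CLIs are installed and that the machine (e.g. a RunPod instance, as in the video) has enough disk and memory for the model.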
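The final sections (17:04–20:12) hinge on registering your Ollama public key and pushing the model under your account's namespace. A sketch, assuming a placeholder account name `your-username` and the default key location used by Ollama:

```shell
# Show the public key Ollama generated on this machine; paste it into
# the "Ollama keys" page of your ollama.com account to authorize pushes.
cat ~/.ollama/id_ed25519.pub

# Models pushed to ollama.com must be named <username>/<model>,
# so copy the local model to that name, then push it.
ollama cp my-custom-model your-username/my-custom-model
ollama push your-username/my-custom-model

# On any other machine, pull the published model and test it locally.
ollama pull your-username/my-custom-model
ollama run your-username/my-custom-model "Hello from my local system!"
```

Once pushed, the model behaves like any other library model: anyone (or any machine) that runs `ollama pull` with the namespaced name gets your custom build.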