
{"id":4693,"date":"2025-02-20T23:33:41","date_gmt":"2025-02-20T15:33:41","guid":{"rendered":"https:\/\/infernews.com\/?page_id=4693"},"modified":"2025-02-25T18:41:51","modified_gmt":"2025-02-25T10:41:51","slug":"all-about-ai","status":"publish","type":"page","link":"https:\/\/infernews.com\/blog\/all-about-ai\/","title":{"rendered":"All about AI!"},"content":{"rendered":"<figure class=\"wp-block-embed-youtube wp-block-embed is-type-video is-provider-youtube wp-embed-aspect-16-9 wp-has-aspect-ratio\"><div class=\"lyte-wrapper\" title=\"All Machine Learning Models Clearly Explained!\" style=\"width:853px;max-width:100%;margin:5px auto;\"><div class=\"lyMe\" id=\"WYL_0YdpwSYMY6I\" itemprop=\"video\" itemscope itemtype=\"https:\/\/schema.org\/VideoObject\"><div><meta itemprop=\"thumbnailUrl\" content=\"https:\/\/infernews.com\/blog\/wp-content\/plugins\/wp-youtube-lyte\/lyteCache.php?origThumbUrl=https%3A%2F%2Fi.ytimg.com%2Fvi%2F0YdpwSYMY6I%2Fhqdefault.jpg\" \/><meta itemprop=\"embedURL\" content=\"https:\/\/www.youtube.com\/embed\/0YdpwSYMY6I\" \/><meta itemprop=\"duration\" content=\"PT22M23S\" \/><meta itemprop=\"uploadDate\" content=\"2025-02-02T18:35:04Z\" \/><\/div><div id=\"lyte_0YdpwSYMY6I\" data-src=\"https:\/\/infernews.com\/blog\/wp-content\/plugins\/wp-youtube-lyte\/lyteCache.php?origThumbUrl=https%3A%2F%2Fi.ytimg.com%2Fvi%2F0YdpwSYMY6I%2Fhqdefault.jpg\" class=\"pL\"><div class=\"tC\"><div class=\"tT\" itemprop=\"name\">All Machine Learning Models Clearly Explained!<\/div><\/div><div class=\"play\"><\/div><div class=\"ctrl\"><div class=\"Lctrl\"><\/div><div class=\"Rctrl\"><\/div><\/div><\/div><noscript><a href=\"https:\/\/youtu.be\/0YdpwSYMY6I\" rel=\"nofollow\"><img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/infernews.com\/blog\/wp-content\/plugins\/wp-youtube-lyte\/lyteCache.php?origThumbUrl=https%3A%2F%2Fi.ytimg.com%2Fvi%2F0YdpwSYMY6I%2F0.jpg\" alt=\"All Machine Learning Models Clearly Explained!\" width=\"853\" height=\"460\" \/><br \/>Watch 
this video on YouTube<\/a><\/noscript><meta itemprop=\"description\" content=\"#ml #machinelearning #ai #artificialintelligence #datascience #regression #classification \ud83d\udd25 In this video, we explain every major Machine Learning algorithm. Regression models: Linear Regression, Polynomial Regression. Classification models: Logistic Regression, Naive Bayes. Models used for both: Decision Tree, Random Forest, Support Vector Machines, K-Nearest Neighbors. Ensembles: Bagging, Boosting, Voting, and Stacking. Deep Learning: Fully Connected (Dense) Neural Networks. Unsupervised learning: K-Means clustering and the Principal Component Analysis (PCA) dimensionality reduction technique. Heads up! You can&#039;t learn Machine Learning in just 22 minutes, a day, a week or even a month! It takes continuous dedication, patience, and consistent effort. I\u2019m here to guide you every step of the way with clear explanations, tips, and resources to make your learning experience easier! Don&#039;t worry if some of the concepts were hard to understand! Keep at it, and you\u2019ll get there. Subscribe and like the video if you found it helpful! Starting with this video, we\u2019ll be posting a quick quiz on our Instagram page to help you review the material and test your understanding! It\u2019s a great way to reinforce what you\u2019ve learned and see how well you understand the concepts. Be sure to follow us on Instagram to keep track of your progress and challenge yourself! Instagram: https:\/\/www.instagram.com\/easyaiforall\/ \ud83d\udd0d Key points covered: 0:00 - Introduction. 0:22 - Linear Regression. 2:00 - Logistic Regression. 3:12 - Naive Bayes. 4:15 - Decision Trees. 6:25 - Random Forests. 7:55 - Support Vector Machines. 10:05 - K-Nearest Neighbors. 12:23 - Ensembles. 12:49 - Ensembles (Bagging). 13:18 - Ensembles (Boosting). 13:55 - Ensembles (Voting). 14:48 - Ensembles (Stacking). 15:55 - Neural Networks. 18:59 - K-Means. 
20:58 - Principal Component Analysis. 22:05 - Subscribe to us! \ud83d\udd14 Don&#039;t forget to like, subscribe, and hit the bell icon to stay updated with our latest videos! \ud83e\udd16 Note that we use synthetic generations, such as AI-generated images and voices, to enhance the appeal and engagement of our content. \ud83c\udf10 If you have any questions or topics you want us to cover, leave a comment below. Additionally, share your thoughts about the content: how do you think we can make it better? Thanks for watching!\"><\/div><\/div><div class=\"lL\" style=\"max-width:100%;width:853px;margin:5px auto;\"><\/div><figcaption><\/figcaption><\/figure>\n<figure class=\"wp-block-embed-youtube wp-block-embed is-type-video is-provider-youtube wp-embed-aspect-16-9 wp-has-aspect-ratio\"><div class=\"lyte-wrapper\" title=\"The moment we stopped understanding AI [AlexNet]\" style=\"width:853px;max-width:100%;margin:5px auto;\"><div class=\"lyMe\" id=\"WYL_UZDiGooFs54\" itemprop=\"video\" itemscope itemtype=\"https:\/\/schema.org\/VideoObject\"><div><meta itemprop=\"thumbnailUrl\" content=\"https:\/\/infernews.com\/blog\/wp-content\/plugins\/wp-youtube-lyte\/lyteCache.php?origThumbUrl=https%3A%2F%2Fi.ytimg.com%2Fvi%2FUZDiGooFs54%2Fhqdefault.jpg\" \/><meta itemprop=\"embedURL\" content=\"https:\/\/www.youtube.com\/embed\/UZDiGooFs54\" \/><meta itemprop=\"duration\" content=\"PT17M38S\" \/><meta itemprop=\"uploadDate\" content=\"2024-07-01T19:09:21Z\" \/><\/div><div id=\"lyte_UZDiGooFs54\" data-src=\"https:\/\/infernews.com\/blog\/wp-content\/plugins\/wp-youtube-lyte\/lyteCache.php?origThumbUrl=https%3A%2F%2Fi.ytimg.com%2Fvi%2FUZDiGooFs54%2Fhqdefault.jpg\" class=\"pL\"><div class=\"tC\"><div class=\"tT\" itemprop=\"name\">The moment we stopped understanding AI [AlexNet]<\/div><\/div><div class=\"play\"><\/div><div class=\"ctrl\"><div class=\"Lctrl\"><\/div><div class=\"Rctrl\"><\/div><\/div><\/div><noscript><a href=\"https:\/\/youtu.be\/UZDiGooFs54\" 
rel=\"nofollow\"><img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/infernews.com\/blog\/wp-content\/plugins\/wp-youtube-lyte\/lyteCache.php?origThumbUrl=https%3A%2F%2Fi.ytimg.com%2Fvi%2FUZDiGooFs54%2F0.jpg\" alt=\"The moment we stopped understanding AI [AlexNet]\" width=\"853\" height=\"460\" \/><br \/>Watch this video on YouTube<\/a><\/noscript><meta itemprop=\"description\" content=\"Thanks to KiwiCo for sponsoring today&#039;s video! Go to\u00a0https:\/\/www.kiwico.com\/welchlabs and use code\u00a0WELCHLABS\u00a0for\u00a050% off your first month of monthly lines\u00a0and\/or\u00a0for 20% off your first Panda Crate. Activation Atlas Posters! https:\/\/www.welchlabs.com\/resources\/5gtnaauv6nb9lrhoz9cp604padxp5o https:\/\/www.welchlabs.com\/resources\/activation-atlas-poster-mixed5b-13x19 https:\/\/www.welchlabs.com\/resources\/large-activation-atlas-poster-mixed4c-24x36 https:\/\/www.welchlabs.com\/resources\/activation-atlas-poster-mixed4c-13x19 Special thanks to the Patrons: Juan Benet, Ross Hanson, Yan Babitski, AJ Englehardt, Alvin Khaled, Eduardo Barraza, Hitoshi Yamauchi, Jaewon Jung, Mrgoodlight, Shinichi Hayashi, Sid Sarasvati, Dominic Beaumont, Shannon Prater, Ubiquity Ventures, Matias Forti Welch Labs Ad free videos and exclusive perks: https:\/\/www.patreon.com\/welchlabs Watch on TikTok: https:\/\/www.tiktok.com\/@welchlabs Learn More or Contact: https:\/\/www.welchlabs.com\/ Instagram: https:\/\/www.instagram.com\/welchlabs X: https:\/\/twitter.com\/welchlabs References AlexNet Paper https:\/\/proceedings.neurips.cc\/paper_files\/paper\/2012\/file\/c399862d3b9d6b76c8436e924a68c45b-Paper.pdf Original Activation Atlas Article- explore here - Great interactive Atlas! https:\/\/distill.pub\/2019\/activation-atlas\/ Carter, et al., &quot;Activation Atlas&quot;, Distill, 2019. 
Feature Visualization Article: https:\/\/distill.pub\/2017\/feature-visualization\/ Olah, et al., &quot;Feature Visualization&quot;, Distill, 2017. Great LLM Explainability work: https:\/\/transformer-circuits.pub\/2024\/scaling-monosemanticity\/index.html Templeton, et al., &quot;Scaling Monosemanticity: Extracting Interpretable Features from Claude 3 Sonnet&quot;, Transformer Circuits Thread, 2024. &quot;Deep Visualization Toolbox&quot; video by Jason Yosinski inspired many visuals: https:\/\/www.youtube.com\/watch?v=AgkfIQ4IGaM Great LLM\/GPT intro paper: https:\/\/arxiv.org\/pdf\/2304.10557 3B1B\u2019s GPT videos are excellent, as always: https:\/\/www.youtube.com\/watch?v=eMlx5fFNoYc https:\/\/www.youtube.com\/watch?v=wjZofJX0v4M Andrej Karpathy&#039;s walkthrough is amazing: https:\/\/www.youtube.com\/watch?v=kCc8FmEb1nY Goodfellow\u2019s Deep Learning Book: https:\/\/www.deeplearningbook.org\/ OpenAI\u2019s 10,000 V100 GPU cluster (1+ exaflop): https:\/\/news.microsoft.com\/source\/features\/innovation\/openai-azure-supercomputer\/ GPT-3 size, etc.: Language Models are Few-Shot Learners, Brown et al., 2020. Unique token count for ChatGPT: https:\/\/cookbook.openai.com\/examples\/how_to_count_tokens_with_tiktoken GPT-4 training size etc., speculative: https:\/\/patmcguinness.substack.com\/p\/gpt-4-details-revealed https:\/\/www.semianalysis.com\/p\/gpt-4-architecture-infrastructure Historical Neural Network Videos: https:\/\/www.youtube.com\/watch?v=FwFduRA_L6Q https:\/\/www.youtube.com\/watch?v=cNxadbrN_aI Errata: 1:40 should be &quot;word fragment is appended to the end of the original input&quot;. Thanks to Chris A for finding this one. 
\"><\/div><\/div><div class=\"lL\" style=\"max-width:100%;width:853px;margin:5px auto;\"><\/div><figcaption><\/figcaption><\/figure>\n<figure class=\"wp-block-embed-youtube wp-block-embed is-type-video is-provider-youtube wp-embed-aspect-16-9 wp-has-aspect-ratio\"><div class=\"lyte-wrapper\" title=\"AI can&amp;#039;t cross this line and we don&amp;#039;t know why.\" style=\"width:853px;max-width:100%;margin:5px auto;\"><div class=\"lyMe\" id=\"WYL_5eqRuVp65eY\" itemprop=\"video\" itemscope itemtype=\"https:\/\/schema.org\/VideoObject\"><div><meta itemprop=\"thumbnailUrl\" content=\"https:\/\/infernews.com\/blog\/wp-content\/plugins\/wp-youtube-lyte\/lyteCache.php?origThumbUrl=https%3A%2F%2Fi.ytimg.com%2Fvi%2F5eqRuVp65eY%2Fhqdefault.jpg\" \/><meta itemprop=\"embedURL\" content=\"https:\/\/www.youtube.com\/embed\/5eqRuVp65eY\" \/><meta itemprop=\"duration\" content=\"PT24M7S\" \/><meta itemprop=\"uploadDate\" content=\"2024-09-13T18:09:57Z\" \/><\/div><div id=\"lyte_5eqRuVp65eY\" data-src=\"https:\/\/infernews.com\/blog\/wp-content\/plugins\/wp-youtube-lyte\/lyteCache.php?origThumbUrl=https%3A%2F%2Fi.ytimg.com%2Fvi%2F5eqRuVp65eY%2Fhqdefault.jpg\" class=\"pL\"><div class=\"tC\"><div class=\"tT\" itemprop=\"name\">AI can&#039;t cross this line and we don&#039;t know why.<\/div><\/div><div class=\"play\"><\/div><div class=\"ctrl\"><div class=\"Lctrl\"><\/div><div class=\"Rctrl\"><\/div><\/div><\/div><noscript><a href=\"https:\/\/youtu.be\/5eqRuVp65eY\" rel=\"nofollow\"><img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/infernews.com\/blog\/wp-content\/plugins\/wp-youtube-lyte\/lyteCache.php?origThumbUrl=https%3A%2F%2Fi.ytimg.com%2Fvi%2F5eqRuVp65eY%2F0.jpg\" alt=\"AI can&amp;#039;t cross this line and we don&amp;#039;t know why.\" width=\"853\" height=\"460\" \/><br \/>Watch this video on YouTube<\/a><\/noscript><meta itemprop=\"description\" content=\"Have we discovered an ideal gas law for AI? 
Head to https:\/\/brilliant.org\/WelchLabs\/ to try Brilliant for free for 30 days and get 20% off an annual premium subscription. Welch Labs Imaginary Numbers Book! https:\/\/www.welchlabs.com\/resources\/imaginary-numbers-book Welch Labs Posters: https:\/\/www.welchlabs.com\/resources Support Welch Labs on Patreon! https:\/\/www.patreon.com\/welchlabs Special thanks to Patrons: Juan Benet, Ross Hanson, Yan Babitski, AJ Englehardt, Alvin Khaled, Eduardo Barraza, Hitoshi Yamauchi, Jaewon Jung, Mrgoodlight, Shinichi Hayashi, Sid Sarasvati, Dominic Beaumont, Shannon Prater, Ubiquity Ventures, Matias Forti, Brian Henry, Tim Palade, Petar Vecutin Learn more about WelchLabs! https:\/\/www.welchlabs.com TikTok: https:\/\/www.tiktok.com\/@welchlabs Instagram: https:\/\/www.instagram.com\/welchlabs REFERENCES A Neural Scaling Law from the Dimension of the Data Manifold: https:\/\/arxiv.org\/pdf\/2004.10802 First 2020 OpenAI Scaling Paper: https:\/\/arxiv.org\/pdf\/2001.08361 GPT-3 Paper: https:\/\/arxiv.org\/pdf\/2005.14165 Second 2020 OpenAI Scaling Paper: https:\/\/arxiv.org\/pdf\/2010.14701 Google DeepMind \u201cChinchilla Scaling\u201d Paper: https:\/\/arxiv.org\/abs\/2203.15556 Nice summary of Chinchilla Scaling: https:\/\/www.lesswrong.com\/posts\/6Fpvch8RR29qLEWNH\/chinchilla-s-wild-implications GPT-4 Technical Report: https:\/\/arxiv.org\/pdf\/2303.08774 Nice Neural Scaling Laws Summary: https:\/\/www.lesswrong.com\/posts\/Yt5wAXMc7D2zLpQqx\/an-140-theoretical-models-that-predict-scaling-laws Explaining Neural Scaling Laws: https:\/\/arxiv.org\/pdf\/2102.06701 High Cost of Training GPT-4: https:\/\/www.wired.com\/story\/openai-ceo-sam-altman-the-age-of-giant-ai-models-is-already-over\/ Nvidia V100 FLOPs: https:\/\/lambdalabs.com\/blog\/demystifying-gpt-3 Nvidia V100 Original Price: https:\/\/www.microway.com\/hpc-tech-tips\/nvidia-tesla-v100-price-analysis\/ Great paper on scaling up training infrastructure: https:\/\/arxiv.org\/pdf\/2104.04473 Eight Things to Know about LLMs: https:\/\/arxiv.org\/abs\/2304.00612 Emergent Properties of LLMs: https:\/\/arxiv.org\/abs\/2206.07682 Theoretical Motivation for Cross Entropy (Section 6.2): https:\/\/www.deeplearningbook.org\/ Some papers that appear to pass the compute efficient frontier: https:\/\/arxiv.org\/pdf\/2206.14486 https:\/\/arxiv.org\/abs\/2210.11399 Leaked GPT-4 training info: https:\/\/patmcguinness.substack.com\/p\/gpt-4-details-revealed https:\/\/www.semianalysis.com\/p\/gpt-4-architecture-infrastructure https:\/\/epochai.org\/blog\/tracking-large-scale-ai-models\"><\/div><\/div><div class=\"lL\" style=\"max-width:100%;width:853px;margin:5px auto;\"><\/div><figcaption><\/figcaption><\/figure>\n<figure class=\"wp-block-embed-youtube wp-block-embed is-type-video is-provider-youtube wp-embed-aspect-16-9 wp-has-aspect-ratio\"><div class=\"lyte-wrapper\" title=\"The Dark Matter of AI [Mechanistic Interpretability]\" style=\"width:853px;max-width:100%;margin:5px auto;\"><div class=\"lyMe\" id=\"WYL_UGO_Ehywuxc\" itemprop=\"video\" itemscope itemtype=\"https:\/\/schema.org\/VideoObject\"><div><meta itemprop=\"thumbnailUrl\" content=\"https:\/\/infernews.com\/blog\/wp-content\/plugins\/wp-youtube-lyte\/lyteCache.php?origThumbUrl=https%3A%2F%2Fi.ytimg.com%2Fvi%2FUGO_Ehywuxc%2Fhqdefault.jpg\" \/><meta itemprop=\"embedURL\" content=\"https:\/\/www.youtube.com\/embed\/UGO_Ehywuxc\" \/><meta itemprop=\"duration\" content=\"PT24M9S\" \/><meta itemprop=\"uploadDate\" content=\"2024-12-23T18:39:19Z\" \/><\/div><div id=\"lyte_UGO_Ehywuxc\" data-src=\"https:\/\/infernews.com\/blog\/wp-content\/plugins\/wp-youtube-lyte\/lyteCache.php?origThumbUrl=https%3A%2F%2Fi.ytimg.com%2Fvi%2FUGO_Ehywuxc%2Fhqdefault.jpg\" class=\"pL\"><div class=\"tC\"><div 
class=\"tT\" itemprop=\"name\">The Dark Matter of AI [Mechanistic Interpretability]<\/div><\/div><div class=\"play\"><\/div><div class=\"ctrl\"><div class=\"Lctrl\"><\/div><div class=\"Rctrl\"><\/div><\/div><\/div><noscript><a href=\"https:\/\/youtu.be\/UGO_Ehywuxc\" rel=\"nofollow\"><img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/infernews.com\/blog\/wp-content\/plugins\/wp-youtube-lyte\/lyteCache.php?origThumbUrl=https%3A%2F%2Fi.ytimg.com%2Fvi%2FUGO_Ehywuxc%2F0.jpg\" alt=\"The Dark Matter of AI [Mechanistic Interpretability]\" width=\"853\" height=\"460\" \/><br \/>Watch this video on YouTube<\/a><\/noscript><meta itemprop=\"description\" content=\"Take your personal data back with Incogni! Use code WELCHLABS at the link below and get 60% off an annual plan: http:\/\/incogni.com\/welchlabs Welch Labs Imaginary Numbers Book! https:\/\/www.welchlabs.com\/resources\/imaginary-numbers-book Welch Labs Posters:https:\/\/www.welchlabs.com\/resources Special Thanks to Patrons https:\/\/www.patreon.com\/welchlabs Juan Benet, Ross Hanson, Yan Babitski, AJ Englehardt, Alvin Khaled, Eduardo Barraza, Hitoshi Yamauchi, Jaewon Jung, Mrgoodlight, Shinichi Hayashi, Sid Sarasvati, Dominic Beaumont, Shannon Prater, Ubiquity Ventures, Matias Forti, Brian Henry, Tim Palade, Petar Vecutin, Nicolas baumann, Jason Singh, Robert Riley, vornska, Barry Silverman My Gemma walkthrough notebook: https:\/\/colab.research.google.com\/drive\/1Y68yNr5TcHr4G5RJ0QHZhKkDe55AUkVj?usp=sharing Most animations made with Manim: https:\/\/github.com\/3b1b\/manim References and Further Reading Chris Olah\u2019s original \u201cDark Matter of Neural Networks\u201d post: https:\/\/transformer-circuits.pub\/2024\/july-update\/index.html#dark-matter Great recent interview with Chris Olah: https:\/\/www.youtube.com\/watch?v=ugvHCXCOmm4 Gemma Scope: https:\/\/arxiv.org\/pdf\/2408.05147 Experiment with SAEs yourself here! 
https:\/\/www.neuronpedia.org\/ Relevant work from the Anthropic team: https:\/\/transformer-circuits.pub\/2022\/toy_model\/index.html https:\/\/transformer-circuits.pub\/2023\/monosemantic-features https:\/\/transformer-circuits.pub\/2024\/scaling-monosemanticity\/ Excellent intro to Mechanistic Interpretability: https:\/\/arena3-chapter1-transformer-interp.streamlit.app\/1.2_Intro_to_Mech_Interp Neel Nanda\u2019s Mechanistic Interpretability Explainer: https:\/\/dynalist.io\/d\/n2ZWtnoYHrU1s4vnFSAQ519J Transformer Lens: https:\/\/github.com\/TransformerLensOrg\/TransformerLens SAE Lens: https:\/\/jbloomaus.github.io\/SAELens\/ Technical Notes 1. There are more advanced and more meaningful ways to map mid-layer vectors to outputs; see: https:\/\/arxiv.org\/pdf\/2303.08112, https:\/\/neuralblog.github.io\/logit-prisms\/, https:\/\/www.lesswrong.com\/posts\/AcKRB8wDpdaN6v6ru\/interpreting-gpt-the-logit-lens 2. The 6x2304 matrix is actually 7x2304; we\u2019re ignoring the \/bos token. 3. Gemma also includes positional embeddings and lots and lots of normalization layers, which we didn\u2019t really cover. 4. I\u2019m conflating tokens and words sometimes; in this example each word is a token, so we don\u2019t have to worry about it too much. 5. 
The \u201c_\u201d characters represent spaces in the token strings\"><\/div><\/div><div class=\"lL\" style=\"max-width:100%;width:853px;margin:5px auto;\"><\/div><figcaption><\/figcaption><\/figure>\n","protected":false},"excerpt":{"rendered":"","protected":false},"author":1,"featured_media":0,"parent":0,"menu_order":0,"comment_status":"closed","ping_status":"closed","template":"","meta":{"googlesitekit_rrm_CAowvqSiDA:productID":"","footnotes":""},"class_list":["post-4693","page","type-page","status-publish","hentry"],"_links":{"self":[{"href":"https:\/\/infernews.com\/blog\/wp-json\/wp\/v2\/pages\/4693","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/infernews.com\/blog\/wp-json\/wp\/v2\/pages"}],"about":[{"href":"https:\/\/infernews.com\/blog\/wp-json\/wp\/v2\/types\/page"}],"author":[{"embeddable":true,"href":"https:\/\/infernews.com\/blog\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/infernews.com\/blog\/wp-json\/wp\/v2\/comments?post=4693"}],"version-history":[{"count":0,"href":"https:\/\/infernews.com\/blog\/wp-json\/wp\/v2\/pages\/4693\/revisions"}],"wp:attachment":[{"href":"https:\/\/infernews.com\/blog\/wp-json\/wp\/v2\/media?parent=4693"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}