Ten from the weekend 09/03: A few interesting reads that I came across:
Focus areas: Blockchain | ML/AI | Data science/Analytics applications | eSports | CRISPR | Design thinking
1. Amazon’s $43B ad business, explained: https://www.readtrung.com/p/amazons-43b-ad-business-explained
2. BarbAIrians at the Gate: The Financial Opportunity of AI: https://a16z.com/financial-opportunity-of-ai/
3. Getting Washington and Silicon Valley to tame AI: https://80000hours.org/podcast/episodes/mustafa-suleyman-getting-washington-and-silicon-valley-to-tame-ai/
4. LangSmith, a unified platform for debugging, testing, evaluating, and monitoring your LLM applications: https://blog.langchain.dev/announcing-langsmith/ || https://blog.langchain.dev/using-langsmith-to-support-fine-tuning-of-open-source-llms/
5. Google Gemini Eats The World — Gemini Smashes GPT-4 By 5X, The GPU-Poors: https://www.semianalysis.com/p/google-gemini-eats-the-world-gemini || https://www.servethehome.com/google-details-tpuv4-and-its-crazy-optically-reconfigurable-ai-network/
The rest are longer reads, FYI:
6. LegalBench, a benchmark of legal reasoning tasks: https://hazyresearch.stanford.edu/legalbench/tasks/ || Interesting if you are looking into authoring apps or use cases in other industries too; legal research is a good starting point.
7. Large Language Models with Semantic Search, a DeepLearning.AI short course: https://www.deeplearning.ai/short-courses/large-language-models-semantic-search/
8. A Survey on Large Language Model based Autonomous Agents: https://arxiv.org/abs/2308.11432v1
9. Patterns for Building LLM-based Systems & Products: https://eugeneyan.com/writing/llm-patterns/
10. Explaining Vector Databases in 3 Levels of Difficulty: https://towardsdatascience.com/explaining-vector-databases-in-3-levels-of-difficulty-fc392e48ab78
Bonus: Lemur, state-of-the-art open pretrained large language models balancing text and code capabilities: https://github.com/OpenLemur/Lemur
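If you read only one of the long reads, the vector-database explainer (#10) is a good pick. Its core idea reduces to nearest-neighbor search over embedding vectors; here is a minimal brute-force sketch in plain numpy with toy 3-D vectors (illustrative only; real vector databases use high-dimensional embeddings and approximate-nearest-neighbor indexes instead of exhaustive search):

```python
import numpy as np

def cosine_top_k(query, vectors, k=1):
    """Return indices of the k stored vectors most similar to query, by cosine similarity."""
    q = query / np.linalg.norm(query)                              # normalize the query
    v = vectors / np.linalg.norm(vectors, axis=1, keepdims=True)   # normalize each row
    sims = v @ q                                                   # cosine similarities
    return np.argsort(-sims)[:k]                                   # highest similarity first

# Toy "database" of three embeddings
db = np.array([[1.0, 0.0, 0.0],
               [0.0, 1.0, 0.0],
               [0.7, 0.7, 0.0]])

print(cosine_top_k(np.array([0.9, 0.1, 0.0]), db, k=2))  # → [0 2]
```

The article's three difficulty levels essentially layer indexing structures and quantization on top of exactly this similarity computation.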