
Learn Prompting: Your Guide to Communicating with AI

5 Ways We're Using AI at Work - IDEO

Are you using ChatGPT yet? If the answer is no, keep reading. For anyone whose day job involves creating things, such as proposals or presentations that require fact-finding, writing, imagery, and video, generative AI is changing how we make stuff. Love it or fear it, it may be our new co-worker. This phase of the AI revolution feels personal. As we would with any new colleague, we hope to build constructive relationships with these technologies, understand them better, and collaborate. Back in 2019, before generative AI was everywhere, we created a deck of AI Ethics cards, which you can download here. Here are a few of the ways we've been experimenting so far:

1. The classic image of design thinking is people huddled around a Post-It-littered foam core board. Knowing that the AI doesn't have the larger context of the project at hand, Takashi and his teammates use Notion AI's output as supplemental input.
2. Designers come up with a lot of ideas.

Midjourney prompt builder

What is prompt engineering and what is it used for? - tolk.ai - Generative AI powered Chatbot and Livechat solutions

tolk.ai publishes generative AI software dedicated to customer relations, offering livechat and chatbot solutions boosted by artificial intelligence.

How to Talk to AI (HTTTA) - a podcast newsletter by Wes the Synthmind

2nd Workshop on 'Public Interest AI' – HIIG

The Alexander von Humboldt Institute for Internet and Society (HIIG) is organising a second workshop on public interest AI, building on the discussion at the KI2023 edition. The event will be held in English.

Monday, 23 September 2024 | 10.00am – 4.15pm
Hosted by Theresa Züger & Hadi Asghari
Co-located with the 47th German Conference on AI (KI 2024)
Julius-Maximilians Universität Würzburg | Campus Hubland Süd, building M2
Note: the deadline for submissions has passed (12 July 2024).

Aim & Scope: The number of AI projects aiming to serve the common good or a public interest is increasing rapidly, and the motivation to use AI for a common good is claimed widely. This workshop builds on the lively discussions of the KI2023 edition, addressing AI from an interdisciplinary perspective and bringing the goal of serving the public interest to the forefront. The program will be aligned with the conference schedule for start and breaks.

eCommerce ChatGPT Prompts

L'Intelligence Artificielle... avec intelligence ! - Cours - FUN MOOC

The course material is licensed for sharing (copy and redistribute in any medium or format) and adaptation (remix, transform, and build upon the material), under the condition of attribution: you must credit the work, link to the license, and indicate whether changes were made.

When Should You Fine-Tune LLMs? | by Skanda Vivek | May 2023

The problem of supplying the model with all the information it needs to answer a question is offloaded from the model architecture to a database containing document chunks. Relevant documents can then be found by computing similarities between the question and the chunks. Typically, the chunks and the question are converted into embedding vectors, cosine similarities between each chunk and the question are computed, and only the chunks above a certain cosine-similarity threshold are kept as relevant context. Finally, the question and context are combined into a prompt and fed into an LLM API such as ChatGPT. You might ask: why not feed the entire documents into the prompt instead of splitting them into chunks? Offloading documents to a database and querying through closed LLM APIs works well in cases where the answer is obviously present in those documents. Ok, so you tried out ChatGPT or Bard, and you didn't like it.
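The retrieval flow described above (embed the chunks and the question, score by cosine similarity, keep chunks above a threshold, then assemble the prompt) can be sketched as follows. This is a minimal illustration, not the article's actual code: the `embed` function here is a toy bag-of-words counter standing in for a real embedding model, and the threshold value is arbitrary.

```python
import math
import re
from collections import Counter


def embed(text):
    """Toy bag-of-words 'embedding': a sparse token-count vector.

    A stand-in for a real embedding model (e.g. a sentence encoder);
    in practice you would call an embedding API here.
    """
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))


def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[t] * b[t] for t in a if t in b)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0


def retrieve(question, chunks, threshold=0.1):
    """Return chunks whose similarity to the question clears the
    threshold, ranked from most to least similar."""
    q_vec = embed(question)
    scored = sorted(((cosine(q_vec, embed(c)), c) for c in chunks),
                    reverse=True)
    return [chunk for score, chunk in scored if score >= threshold]


def build_prompt(question, context_chunks):
    """Combine the question and retrieved context into one prompt."""
    context = "\n".join(context_chunks)
    return ("Answer the question using only the context below.\n\n"
            f"Context:\n{context}\n\n"
            f"Question: {question}")


chunks = [
    "The Eiffel Tower stands in Paris, France.",
    "Cosine similarity compares two vectors.",
    "Bananas are rich in potassium.",
]
question = "Where does the Eiffel Tower stand?"
context = retrieve(question, chunks)
prompt = build_prompt(question, context)
```

In this example only the first chunk shares enough tokens with the question to clear the threshold, so the prompt contains just that chunk as context; a production system would swap the toy counter for real embeddings and store the chunk vectors in a vector database rather than re-embedding them per query.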

Related: