BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//AKTUARVEREINIGUNG ÖSTERREICHS (AVÖ) - ECPv6.15.20//NONSGML v1.0//EN
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-ORIGINAL-URL:https://avoe.at
X-WR-CALDESC:Veranstaltungen für AKTUARVEREINIGUNG ÖSTERREICHS (AVÖ)
REFRESH-INTERVAL;VALUE=DURATION:PT1H
X-Robots-Tag:noindex
X-PUBLISHED-TTL:PT1H
BEGIN:VTIMEZONE
TZID:Europe/Vienna
BEGIN:DAYLIGHT
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
TZNAME:CEST
DTSTART:20250330T010000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
TZNAME:CET
DTSTART:20251026T010000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
TZNAME:CEST
DTSTART:20260329T010000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
TZNAME:CET
DTSTART:20261025T010000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
TZNAME:CEST
DTSTART:20270328T010000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
TZNAME:CET
DTSTART:20271031T010000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTART;TZID=Europe/Vienna:20261012T090000
DTEND;TZID=Europe/Vienna:20261013T133000
DTSTAMP:20260424T220743Z
CREATED:20260407T085023Z
LAST-MODIFIED:20260407T085023Z
UID:10000641-1791795600-1791898200@avoe.at
SUMMARY:EAA Web Session 'From Deep Learning to Transformers: Foundations of Modern LLMs'
DESCRIPTION:Deep learning (DL) belongs to the field of artificial intelligence and excels at extracting and mastering the often highly non-linear patterns of a given process\, whatever that process might be. The main requirement is the availability of a large amount of data describing the behaviour of the process under different conditions\, together with substantial computational power. With data collection becoming cheaper and computational power still following Moore’s law\, fitting DL models that produce extremely useful predictions has become a practical reality. \nWhile this family of models is broad\, one particular architecture has reshaped the field of text analysis: the transformer. Transformers were originally introduced to overcome the limitations of earlier neural networks when dealing with sequential data such as text\, where long-range dependencies and contextual meaning matter. Their ability to process entire sequences in parallel and to model relationships between all words at once made them uniquely suited for language tasks. \nLarge Language Models (LLMs) are essentially very large transformer networks trained on massive text corpora. They represent a natural continuation of deep learning\, but with capabilities that go far beyond what earlier DL architectures could achieve: reasoning over text\, summarising documents\, and generating explanations. Understanding LLMs therefore benefits from first understanding the deep learning principles on which they are built.\nRegistration deadline: 2026-10-08\nLink: https://actuarial-academy.com/en/continuing-education/upcoming-trainings/detail/from-deep-learning-to-transformers-foundations-of-modern-llms-e0578/
URL:https://avoe.at/event/eaa-web-session-from-deep-learning-to-transformers-foundations-of-modern-llms/
LOCATION:Online/Streaming
CATEGORIES:European Actuarial Academy (EAA)
END:VEVENT
END:VCALENDAR