1 00:00:00,000 --> 00:00:04,080 Welcome back to the deep dive. This is where we take a piece of complex tech, 2 00:00:04,080 --> 00:00:08,800 we peel back all that jargon, and we just give you the essentials, and today we are 3 00:00:08,800 --> 00:00:11,520 plunging right into the core of big data, 4 00:00:11,520 --> 00:00:15,820 focusing on something that's just critical for handling these massive volumes of 5 00:00:15,820 --> 00:00:17,880 data that are arriving, you know, right now. 6 00:00:17,880 --> 00:00:19,440 We're talking about Apache Druid. 7 00:00:19,440 --> 00:00:23,940 So if you're out there dealing with just mountains of streaming data, or maybe you're 8 00:00:23,940 --> 00:00:26,160 trying to power a dashboard that has to answer questions 9 00:00:26,720 --> 00:00:32,030 instantly, I mean not in minutes but in milliseconds, then this deep dive is for you. 10 00:00:32,030 --> 00:00:33,640 Our mission is pretty simple: 11 00:00:33,640 --> 00:00:38,140 we're going to break down Apache Druid, this powerful real-time analytics database, 12 00:00:38,140 --> 00:00:40,480 and just give you a really clear entry point, a way to 13 00:00:40,480 --> 00:00:44,240 quickly grasp what it is, how it works, and why it's becoming so important in the big 14 00:00:44,240 --> 00:00:44,760 data world. 15 00:00:44,760 --> 00:00:48,230 We've been digging through the core descriptions, the feature lists, all the 16 00:00:48,230 --> 00:00:51,760 community docs, and we're ready to, well, distill that for you. 17 00:00:51,760 --> 00:00:55,220 But first I just want to mention that this deep dive is supported by SafeServer. 18 00:00:55,220 --> 00:00:57,380 SafeServer handles the hosting for this type 19 00:00:57,380 --> 00:01:00,960 of software and supports you in your digital transformation. You can find out more 20 00:01:00,960 --> 00:01:03,320 at www.safeserver.de. 21 00:01:03,320 --> 00:01:07,820 That's www.safeserver.de. Right. So let's just start with the fundamental 22 00:01:07,820 --> 00:01:10,760 definition, because Apache Druid isn't a general-purpose tool. 23 00:01:10,760 --> 00:01:13,420 It's very, very specific. It's a high-performance, 24 00:01:13,420 --> 00:01:19,440 real-time analytics database, and its design goal is incredibly focused. 25 00:01:19,600 --> 00:01:25,200 It's all about delivering sub-second queries on just immense amounts of data. Sub-second? 26 00:01:25,200 --> 00:01:29,060 Okay. Yeah, and that's on both streaming data that just arrived and, you know, 27 00:01:29,060 --> 00:01:34,400 petabytes of historical data, all at scale and under huge load. Think of it this way: 28 00:01:34,400 --> 00:01:37,740 it's an open-source alternative to your traditional data warehouse, 29 00:01:37,740 --> 00:01:42,880 but it's been optimized specifically for that high-speed, interactive analysis. 30 00:01:42,880 --> 00:01:47,220 OLAP, right, online analytical processing. Exactly. It's built to answer those 31 00:01:47,220 --> 00:01:48,960 complex analytical questions instantly. 32 00:01:48,960 --> 00:01:52,940 Okay, let's unpack that OLAP thing immediately, because, I mean, there are countless 33 00:01:52,940 --> 00:01:53,920 databases out there. 34 00:01:53,920 --> 00:01:57,500 You've got relational, NoSQL, data warehouses. Why do we need a different kind of 35 00:01:57,500 --> 00:01:59,240 database like Druid just for analytics? 36 00:01:59,240 --> 00:02:02,880 What's the specific problem here that those other tools can't solve?
The problem is what you could call the time to insight, or, you know, time to action. 38 00:02:07,040 --> 00:02:12,100 Traditional systems, especially the ones built for transactions, OLTP, they're great 39 00:02:12,100 --> 00:02:13,920 for, say, recording a single sale. 40 00:02:13,920 --> 00:02:19,180 Right, one record at a time. Exactly. And even traditional data warehouses, which are 41 00:02:19,180 --> 00:02:21,060 good for big, complex reports, 42 00:02:21,060 --> 00:02:25,260 they're often optimized for batch loading. That means query times can be slow when 43 00:02:25,260 --> 00:02:27,580 you're asking ad hoc, real-time questions. 44 00:02:27,580 --> 00:02:31,520 Druid is designed for that exact moment when you need a live answer, 45 00:02:31,520 --> 00:02:36,320 right now. So, for instance, you run a gaming platform and you need to know the total 46 00:02:36,320 --> 00:02:38,880 spend of users in Berlin in the last minute, 47 00:02:39,280 --> 00:02:43,840 filtered by their device type. Wow. Okay, that's specific. It is, and it requires 48 00:02:43,840 --> 00:02:45,960 instantaneous analysis across 49 00:02:45,960 --> 00:02:50,870 maybe billions of events. Druid steps in to make those ultra-fast, highly dimensional 50 00:02:50,870 --> 00:02:52,160 workflows possible. 51 00:02:52,160 --> 00:02:56,450 So give me the wow metric here. If I'm used to my reports taking, I don't know, 30 52 00:02:56,450 --> 00:02:57,120 seconds to run, 53 00:02:57,120 --> 00:03:00,160 what kind of speed are we really talking about with Druid? We are talking about 54 00:03:00,160 --> 00:03:03,720 true millisecond response times. Millisecond? The documentation 55 00:03:04,240 --> 00:03:09,240 consistently highlights its ability to run these really complex, high-dimensional 56 00:03:09,240 --> 00:03:10,180 OLAP queries in 57 00:03:10,180 --> 00:03:14,460 milliseconds, and that's even against datasets that range from billions to, get this, 58 00:03:14,460 --> 00:03:16,000 trillions of rows. Trillions? 59 00:03:16,000 --> 00:03:20,500 Okay. And critically, it does this without you having to pre-aggregate the data or 60 00:03:20,500 --> 00:03:22,240 pre-define the exact query 61 00:03:22,240 --> 00:03:26,240 you're going to ask. You can just ask anything instantly. That speed is impressive, 62 00:03:26,240 --> 00:03:31,800 but that usually implies a single, highly optimized query. What happens when, 63 00:03:32,040 --> 00:03:35,820 say, a thousand users hit that same data all at once? 64 00:03:35,820 --> 00:03:40,040 Does the performance just tank? And that's where its concurrency power is 65 00:03:40,040 --> 00:03:43,900 so essential. Speed is pretty meaningless if the system just buckles under load. 66 00:03:43,900 --> 00:03:47,880 Druid's architecture handles this well.
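To make that concrete, here is roughly what the gaming example above, the total spend of Berlin users over the last minute broken down by device type, could look like as a Druid SQL query sent to Druid's HTTP SQL endpoint. The datasource and column names (game_purchases, city, device_type, spend) are invented for illustration, and the host and port are only the usual local-router defaults, so treat this as a hedged sketch rather than a recipe.

```python
# Hypothetical sketch: "total spend of users in Berlin in the last minute,
# broken down by device type", expressed as Druid SQL and posted to Druid's
# HTTP SQL endpoint. Datasource and column names are invented for illustration.
import requests

DRUID_SQL_URL = "http://localhost:8888/druid/v2/sql"  # default router port; adjust for your cluster

query = """
SELECT
  device_type,
  SUM(spend) AS total_spend
FROM game_purchases            -- hypothetical datasource
WHERE city = 'Berlin'
  AND __time >= CURRENT_TIMESTAMP - INTERVAL '1' MINUTE
GROUP BY device_type
ORDER BY total_spend DESC
"""

response = requests.post(DRUID_SQL_URL, json={"query": query})
response.raise_for_status()
for row in response.json():
    print(row)  # e.g. {"device_type": "ios", "total_spend": 1234.5}
```

The __time column is Druid's built-in event timestamp; the time indexing described later in this conversation is keyed on it.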
It's fundamentally built to handle that kind of 67 00:03:47,880 --> 00:03:48,640 immense, concurrent load. 68 00:03:48,640 --> 00:03:52,620 We're talking about supporting anywhere from hundreds to even a hundred thousand 69 00:03:52,620 --> 00:03:53,700 queries per second, 70 00:03:53,700 --> 00:03:58,370 QPS. A hundred thousand queries per second? While maintaining that consistent low-latency 71 00:03:58,370 --> 00:03:59,300 performance. 72 00:03:59,400 --> 00:04:04,520 This is precisely why it's used to power consumer-facing UIs or internal dashboards 73 00:04:04,520 --> 00:04:05,280 that, you know, 74 00:04:05,280 --> 00:04:08,980 hundreds of employees are using all day. Every click is a new query. Every single 75 00:04:08,980 --> 00:04:10,360 click on a filter, a 76 00:04:10,360 --> 00:04:14,380 visualization, requires a fresh, lightning-fast query. 77 00:04:14,380 --> 00:04:17,960 Druid is built for that exact operational load. 78 00:04:17,960 --> 00:04:21,240 Okay, but achieving that kind of performance usually means you have to throw 79 00:04:21,240 --> 00:04:23,680 massive, expensive hardware at the problem. 80 00:04:23,880 --> 00:04:28,380 Does Druid manage to be cost-efficient? It absolutely does, and it addresses that 81 00:04:28,380 --> 00:04:31,120 cost question mainly through resource efficiency. 82 00:04:31,120 --> 00:04:34,700 Because the architecture is so optimized, and we'll get into the compression and 83 00:04:34,700 --> 00:04:36,640 indexing in a bit, it just 84 00:04:36,640 --> 00:04:40,940 significantly reduces the total hardware footprint you need compared to other 85 00:04:40,940 --> 00:04:44,200 solutions. Yeah, compared to many competing data warehouses 86 00:04:44,200 --> 00:04:46,920 it's really designed for massive scale, but with 87 00:04:46,920 --> 00:04:51,830 resource conservation built right into its DNA. It makes the total cost of 88 00:04:51,830 --> 00:04:53,400 ownership much, much lower over time. 89 00:04:53,880 --> 00:04:56,880 Okay, so this is where it gets really interesting for me: the real-time aspect. 90 00:04:56,880 --> 00:05:01,740 We're in a streaming world now, data from logs, sensors, clicks. It never stops. 91 00:05:01,740 --> 00:05:05,720 So how does Druid deal with data that showed up one second ago? This is its core 92 00:05:05,720 --> 00:05:06,280 differentiator: 93 00:05:06,280 --> 00:05:12,450 true stream ingestion. Druid is, you could say, stream native. If you've ever worked 94 00:05:12,450 --> 00:05:14,520 with traditional ETL processes, 95 00:05:14,520 --> 00:05:18,640 you know there's always a delay: the data lands,
it gets transformed, then it's 96 00:05:18,640 --> 00:05:20,300 loaded. That can take minutes, 97 00:05:20,440 --> 00:05:25,440 sometimes hours, the dreaded batch window. Right. Druid just eliminates that wait time. 98 00:05:25,440 --> 00:05:30,490 It has native, baked-in integration with the big streaming platforms like Apache 99 00:05:30,490 --> 00:05:30,840 Kafka, 100 00:05:30,840 --> 00:05:36,600 Amazon Kinesis. This allows it to support query on arrival: it ingests and 101 00:05:36,600 --> 00:05:40,580 immediately makes millions of events per second available for query, 102 00:05:40,580 --> 00:05:43,840 with guaranteed consistency and ultra-low latency. 103 00:05:43,840 --> 00:05:48,260 So if a critical event happens, like a security alert or a sudden drop in sales, 104 00:05:48,260 --> 00:05:53,210 I can query that event and act on it within a second, not wait for some nightly 105 00:05:53,210 --> 00:05:54,200 batch process. 106 00:05:54,200 --> 00:05:56,060 That is exactly the use case. 107 00:05:56,060 --> 00:05:59,520 You are querying the live stream at the same time as all the historical data. 108 00:05:59,520 --> 00:06:02,420 You can instantly combine the hot, fresh data, 109 00:06:02,420 --> 00:06:06,400 what happened in the last minute, with the massive cold data from the last five years, all 110 00:06:06,400 --> 00:06:08,040 in a single, unified, 111 00:06:08,040 --> 00:06:12,350 sub-second query. And that gives you the complete picture. A truly complete, up to the 112 00:06:12,350 --> 00:06:14,000 second operational picture. 113 00:06:14,000 --> 00:06:17,980 It's essential for things like monitoring and alerting apps. Okay, we've 114 00:06:17,980 --> 00:06:20,780 established it's ridiculously fast, it's scalable, 115 00:06:20,780 --> 00:06:26,340 it's stream native. Now I want to look under the hood. How does it sustain 116 00:06:26,340 --> 00:06:29,340 millisecond speed on 117 00:06:29,340 --> 00:06:31,460 trillions of rows? 118 00:06:31,460 --> 00:06:33,460 What's the secret sauce in the architecture? 119 00:06:33,460 --> 00:06:38,180 Well, the speed's no accident. It starts the very moment the data is ingested. The 120 00:06:38,180 --> 00:06:39,840 second data touches Druid, 121 00:06:39,840 --> 00:06:43,640 it's automatically transformed into what they call an optimized data format. 122 00:06:43,640 --> 00:06:47,940 It's a multi-layered process, but we can simplify it down to, say, three key 123 00:06:47,940 --> 00:06:48,520 principles. 124 00:06:48,520 --> 00:06:52,810 Okay. First, the data is columnarized. This is a huge departure from traditional row 125 00:06:52,810 --> 00:06:53,660 based databases. 126 00:06:53,660 --> 00:06:58,000 Imagine your data is a physical library. A row-based database stores every book's 127 00:06:58,000 --> 00:06:59,980 info, author, title, date, subject, all 128 00:06:59,980 --> 00:07:02,100 bundled together on the shelf.
That's a row. 129 00:07:02,100 --> 00:07:05,650 So if I just want to know all the subjects, I have to pull every single book record 130 00:07:05,650 --> 00:07:07,020 off the shelf. Precisely. 131 00:07:07,020 --> 00:07:09,340 You have to read through everything. It takes time, it's a wasted operation. 132 00:07:09,900 --> 00:07:13,880 Columnar storage is like taking that library and putting all the authors in one 133 00:07:13,880 --> 00:07:16,780 aisle, all the titles in another, and all the subjects in 134 00:07:16,780 --> 00:07:20,300 a third. So if you only need to run a query on the city column, 135 00:07:20,300 --> 00:07:24,280 you only read the city aisle. You skip all the other massive amounts of unrelated 136 00:07:24,280 --> 00:07:24,380 data. 137 00:07:24,380 --> 00:07:28,880 It just dramatically cuts down the I/O. Okay, so columnar storage is step one for 138 00:07:28,880 --> 00:07:29,460 efficiency. 139 00:07:29,460 --> 00:07:34,750 What's next? Step two is that the data is profoundly optimized for time series 140 00:07:34,750 --> 00:07:35,500 analysis, 141 00:07:36,300 --> 00:07:41,560 specifically through time indexing. Since most data in Druid is time series data, 142 00:07:41,560 --> 00:07:42,920 events happening over time, 143 00:07:42,920 --> 00:07:47,420 it immediately indexes everything based on time. It's like installing a master 144 00:07:47,420 --> 00:07:48,280 clock in your data. 145 00:07:48,280 --> 00:07:53,000 So you can jump to a specific time range instantly. Instantly. The last week, the 146 00:07:53,000 --> 00:07:54,300 third quarter of 2021, 147 00:07:54,300 --> 00:07:57,670 whatever, without scanning anything irrelevant. And the third principle is all about 148 00:07:57,670 --> 00:07:58,780 compression and encoding. 149 00:07:58,780 --> 00:08:02,900 This is the digital shorthand. The data gets highly optimized using techniques like 150 00:08:02,900 --> 00:08:05,320 dictionary encoding and bitmap indexing. 151 00:08:05,460 --> 00:08:08,580 Okay, that sounds like serious jargon. Can you make that a bit more accessible? 152 00:08:08,580 --> 00:08:10,700 What's dictionary encoding actually doing for me? 153 00:08:10,700 --> 00:08:15,480 Of course. Think of dictionary encoding as creating a little lookup table for any 154 00:08:15,480 --> 00:08:16,540 repetitive values. 155 00:08:16,540 --> 00:08:20,740 So if you have a column with a million rows that only contains five city names, 156 00:08:20,740 --> 00:08:24,700 New York, London, Paris, Tokyo, Sydney, Druid 157 00:08:24,700 --> 00:08:29,100 doesn't store the full text New York a million times. That would be crazy. 158 00:08:29,140 --> 00:08:33,810 It would be. Instead, it assigns each value a tiny numerical code, like 1. It stores that 159 00:08:33,810 --> 00:08:36,740 dictionary mapping just once and then stores 160 00:08:36,740 --> 00:08:39,700 millions of tiny 1s in the actual data segment. 161 00:08:39,700 --> 00:08:44,210 Ah, so you save a massive amount of storage space, and because the index is so much 162 00:08:44,210 --> 00:08:47,500 smaller, the query engine can search it way faster. 163 00:08:47,500 --> 00:08:51,520 It's looking for tiny numbers, not long strings. Exactly. It's an ultra-efficient 164 00:08:51,520 --> 00:08:53,620 data reduction that directly boosts speed. 165 00:08:53,900 --> 00:08:57,600 The result of all this is that every piece of data is basically a pre-optimized, 166 00:08:57,600 --> 00:08:59,540 ready-to-run package from the second 167 00:08:59,540 --> 00:09:01,580 it's ingested.
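To make dictionary encoding and bitmap indexing a little more tangible, here is a tiny, purely illustrative Python sketch. It is not Druid's actual segment format, just the general trick the conversation describes: replace repeated strings with small integer codes, and keep one bitmap per value that marks the matching rows.

```python
# Toy illustration of dictionary encoding plus a bitmap index.
# This mimics the general idea, not Druid's actual segment format.

cities = ["New York", "London", "Paris", "New York", "Tokyo",
          "Sydney", "London", "New York"]  # imagine a million of these

# Dictionary encoding: store each distinct string once, keep tiny integer codes.
dictionary = {}          # value -> code
encoded_column = []      # one small int per row instead of a long string
for value in cities:
    code = dictionary.setdefault(value, len(dictionary))
    encoded_column.append(code)

# Bitmap index: for each code, one bit per row saying "this row has that value".
bitmaps = {code: 0 for code in dictionary.values()}
for row, code in enumerate(encoded_column):
    bitmaps[code] |= 1 << row

# A filter like  WHERE city = 'London'  becomes a dictionary lookup
# plus a scan of set bits, instead of comparing strings row by row.
london_code = dictionary["London"]
matching_rows = [row for row in range(len(cities)) if bitmaps[london_code] >> row & 1]

print(dictionary)       # {'New York': 0, 'London': 1, 'Paris': 2, 'Tokyo': 3, 'Sydney': 4}
print(encoded_column)   # [0, 1, 2, 0, 3, 4, 1, 0]
print(matching_rows)    # [1, 6]
```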
Okay, so that's the data structure. 168 00:09:01,580 --> 00:09:04,900 How does the query engine actually use that to get to sub-second speeds? 169 00:09:04,900 --> 00:09:07,600 So the engine uses what's called a scatter/gather approach. 170 00:09:07,600 --> 00:09:11,890 Yeah, and the core philosophy is: don't move the data. Don't move the data? Data 171 00:09:11,890 --> 00:09:13,900 movement is slow. It's expensive. 172 00:09:13,900 --> 00:09:19,050 So Druid makes sure those optimized data segments are loaded directly into memory or onto 173 00:09:19,050 --> 00:09:19,700 local 174 00:09:19,940 --> 00:09:25,020 SSDs on the specific nodes where they live. When you run a query, the system 175 00:09:25,020 --> 00:09:29,210 intelligently figures out which nodes hold the data you need. It scatters the work 176 00:09:29,210 --> 00:09:29,780 to those nodes, 177 00:09:29,780 --> 00:09:33,870 they process it locally using their own CPU and memory, and then they only gather 178 00:09:33,870 --> 00:09:35,220 the small result sets back. 179 00:09:35,220 --> 00:09:38,940 That dramatically cuts down on network latency. 180 00:09:38,940 --> 00:09:42,830 It does. It avoids reading anything extra and keeps the network traffic to an 181 00:09:42,830 --> 00:09:43,820 absolute minimum. 182 00:09:43,820 --> 00:09:48,220 So instead of hauling the entire library to a central desk for processing, 183 00:09:48,220 --> 00:09:51,940 you just send little instruction cards to the specific aisles, and they send back 184 00:09:51,940 --> 00:09:55,020 only the chapter summaries you asked for. That's a perfect analogy. 185 00:09:55,020 --> 00:09:59,140 And that localized processing is just crucial for maintaining performance under all 186 00:09:59,140 --> 00:10:00,140 that concurrency 187 00:10:00,140 --> 00:10:02,140 we talked about. Speaking of concurrency, 188 00:10:02,140 --> 00:10:07,420 applications scale, and, you know, they often scale unpredictably. If my user base triples 189 00:10:07,420 --> 00:10:07,780 tomorrow, 190 00:10:07,780 --> 00:10:12,080 can I scale the system quickly and reliably? That brings us to its elastic 191 00:10:12,080 --> 00:10:12,860 architecture. 192 00:10:13,540 --> 00:10:18,460 Druid was designed from the ground up to be distributed. All its components, ingestion, 193 00:10:18,460 --> 00:10:20,760 queries, orchestration, deep storage, 194 00:10:20,760 --> 00:10:24,420 they're all loosely coupled. And why does loose coupling matter? 195 00:10:24,420 --> 00:10:28,470 It means you can scale the query processing nodes completely independently from the 196 00:10:28,470 --> 00:10:30,780 ingestion nodes or from the storage layer. 197 00:10:30,780 --> 00:10:33,980 So if you suddenly need to handle ten times the query traffic, 198 00:10:33,980 --> 00:10:37,100 you could just provision and add ten times the query nodes in minutes. 199 00:10:37,100 --> 00:10:40,800 You never have to interrupt the system or stop data ingestion. It's incredibly 200 00:10:40,800 --> 00:10:43,300 flexible, scaling both vertically and horizontally. 201 00:10:43,540 --> 00:10:48,040 Yes. And what about when things go wrong? If a node goes down, does my whole 202 00:10:48,040 --> 00:10:50,500 dashboard crash?
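Before the conversation turns to reliability, here is a toy Python sketch of the scatter/gather idea just described: each node computes a partial aggregate over the segments it holds locally, and only the small partial results travel back to a broker. The node names, segment layout, and query shape are invented for illustration; real Druid brokers, query types, and segment assignment are far more sophisticated.

```python
# Toy scatter/gather simulation: partial aggregation happens where the data
# lives, and only small partial results travel back to the "broker".

# Each node holds segments keyed by hour; a segment is just a list of spend values here.
nodes = {
    "historical-1": {"2024-05-01T10": [5.0, 7.5], "2024-05-01T11": [2.0]},
    "historical-2": {"2024-05-01T12": [10.0, 1.5, 3.0]},
    "realtime-1":   {"2024-05-01T13": [4.5]},   # freshly ingested, still "hot"
}

def node_partial_sum(segments, wanted_hours):
    """Runs locally on a node: scan only the relevant segments, return a tiny result."""
    return sum(sum(values) for hour, values in segments.items() if hour in wanted_hours)

def broker_query(wanted_hours):
    """Scatter the work to every node that might hold data, then gather the partials."""
    partials = [node_partial_sum(segments, wanted_hours) for segments in nodes.values()]
    return sum(partials)  # only small numbers crossed the "network"

print(broker_query({"2024-05-01T12", "2024-05-01T13"}))  # 19.0
```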
Absolutely not. 203 00:10:50,500 --> 00:10:55,900 Reliability is a cornerstone. Druid has non-stop reliability features built in, 204 00:10:55,900 --> 00:10:59,520 things like continuous backup to deep storage like S3 205 00:10:59,520 --> 00:11:03,680 or HDFS, automated recovery, and multi-node replication of data segments. 206 00:11:03,680 --> 00:11:06,680 So there's always a copy. Always. If a query node fails, 207 00:11:06,680 --> 00:11:10,680 another node that already holds a copy of that data just automatically steps in. It 208 00:11:10,680 --> 00:11:12,780 ensures high availability and durability. 209 00:11:13,020 --> 00:11:14,740 This is all very powerful, 210 00:11:14,740 --> 00:11:18,180 but for the user who needs to adopt this, the learning curve can be steep. 211 00:11:18,180 --> 00:11:22,730 How accessible is Druid for developers or analysts who are used to more traditional 212 00:11:22,730 --> 00:11:23,140 tools? 213 00:11:23,140 --> 00:11:27,140 Accessibility has been a huge focus in its recent development. 214 00:11:27,140 --> 00:11:32,100 I mean, for one, developers and analysts can use the familiar SQL API for everything. 215 00:11:32,100 --> 00:11:34,940 For everything? Not just querying, for all end-to-end 216 00:11:34,940 --> 00:11:38,460 data operations. You can use standard SQL for querying, sure, 217 00:11:38,460 --> 00:11:42,940 but also for defining ingestion tasks and performing data transformations. 218 00:11:43,060 --> 00:11:47,070 If you know SQL, you have a massive head start. That lowers the barrier to entry 219 00:11:47,070 --> 00:11:47,980 significantly. 220 00:11:47,980 --> 00:11:50,340 You don't have to learn some proprietary query language. 221 00:11:50,340 --> 00:11:55,480 What about defining the schema, the structure of the incoming data? Is that a rigid, 222 00:11:55,480 --> 00:11:56,780 manual process? 223 00:11:56,780 --> 00:12:01,690 That's another big accessibility win. Schema auto-discovery gives you the ease of a 224 00:12:01,690 --> 00:12:02,680 schemaless system, 225 00:12:02,680 --> 00:12:05,120 but with the performance benefits of a strict schema. As 226 00:12:05,120 --> 00:12:10,490 data streams in, Druid can automatically detect, define, and update column names and 227 00:12:10,490 --> 00:12:11,700 their data types. 228 00:12:11,700 --> 00:12:15,400 You don't have to stop everything and manually define a hundred columns before you 229 00:12:15,400 --> 00:12:16,220 can query your data. 230 00:12:16,220 --> 00:12:20,800 It just handles that on the fly. So if I were a new user trying to load my first 231 00:12:20,800 --> 00:12:21,820 data stream, 232 00:12:21,820 --> 00:12:27,160 what's that actual user experience like? Am I writing YAML scripts from day one? Not 233 00:12:27,160 --> 00:12:27,820 at all. 234 00:12:27,820 --> 00:12:33,060 There's a really practical built-in web console. It's designed for easy interaction. 235 00:12:33,060 --> 00:12:37,180 Through that console you get a point-and-click wizard for setting up ingestion, 236 00:12:37,180 --> 00:12:40,060 whether you're loading a huge historical batch file or 237 00:12:40,380 --> 00:12:43,820 configuring a continuous stream from Kafka. So it guides you through it. 238 00:12:43,820 --> 00:12:48,110 It does, and you can also manage the entire cluster from that console: viewing your 239 00:12:48,110 --> 00:12:50,100 data sources, monitoring tasks, 240 00:12:50,100 --> 00:12:54,080 checking the health of all your services.
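For a sense of what "configuring a continuous stream from Kafka" can look like outside the point-and-click console, here is a hedged sketch of a Kafka supervisor spec submitted to Druid's supervisor API. The topic, datasource, and host names are placeholders, and the exact fields vary between Druid versions, so check the official ingestion docs rather than treating this as canonical. Note the dimensionsSpec: it leans on the schema auto-discovery feature described above instead of listing every column by hand.

```python
# Hypothetical sketch: submit a Kafka ingestion supervisor spec to Druid.
# Names (topic, datasource, hosts) are placeholders; field details vary by version.
import requests

supervisor_spec = {
    "type": "kafka",
    "spec": {
        "ioConfig": {
            "type": "kafka",
            "topic": "game-events",                      # placeholder topic
            "consumerProperties": {"bootstrap.servers": "kafka:9092"},
            "inputFormat": {"type": "json"},
        },
        "dataSchema": {
            "dataSource": "game_events",                 # placeholder datasource
            "timestampSpec": {"column": "timestamp", "format": "iso"},
            # Schema auto-discovery: let Druid detect dimensions and types as data arrives.
            "dimensionsSpec": {"useSchemaDiscovery": True},
            "granularitySpec": {"segmentGranularity": "hour", "rollup": False},
        },
        "tuningConfig": {"type": "kafka"},
    },
}

# The supervisor API is typically reachable through the router/Overlord.
resp = requests.post("http://localhost:8888/druid/indexer/v1/supervisor",
                     json=supervisor_spec)
resp.raise_for_status()
print(resp.json())  # usually echoes back the supervisor id
```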
So management and prototyping are all in 241 00:12:54,080 --> 00:12:54,420 one place. 242 00:12:54,420 --> 00:12:59,410 Exactly, and the console also has a query workbench. You can prototype and refine 243 00:12:59,410 --> 00:13:01,900 your Druid SQL queries or native queries 244 00:13:01,900 --> 00:13:05,880 interactively. It's the perfect little sandbox to see how your data will perform 245 00:13:05,880 --> 00:13:09,100 before you push those queries into your actual application. 246 00:13:09,100 --> 00:13:10,980 This has been a fascinating deep dive. 247 00:13:10,980 --> 00:13:14,960 So to summarize the key takeaways for you listening: Apache Druid delivers high 248 00:13:14,960 --> 00:13:19,140 performance, real-time analytics by combining two key things. First, an 249 00:13:19,140 --> 00:13:23,240 optimized data storage format. It's always columnar, it's time-indexed, and it's highly 250 00:13:23,240 --> 00:13:23,740 compressed. 251 00:13:23,740 --> 00:13:28,740 Second, it has a stream-native, elastic architecture that enables that scatter/gather 252 00:13:28,740 --> 00:13:32,320 query approach. And the result of that combination is just 253 00:13:32,320 --> 00:13:37,220 massive concurrency and sub-second query performance on enormous data sets. 254 00:13:37,220 --> 00:13:42,130 And if we connect this back to the bigger picture, the whole architecture of Druid, 255 00:13:42,130 --> 00:13:43,740 prioritizing query on arrival and 256 00:13:43,740 --> 00:13:48,040 millisecond response times, it really raises an important question. In a world that's 257 00:13:48,040 --> 00:13:50,660 demanding instantaneous data and high-concurrency 258 00:13:50,660 --> 00:13:55,250 dashboards, how much longer will traditional database architectures, the ones that 259 00:13:55,250 --> 00:13:57,480 rely on lengthy ETL processes or pre-caching, 260 00:13:57,480 --> 00:14:02,020 how much longer will they remain competitive for operational apps and dynamic UIs? 261 00:14:02,020 --> 00:14:04,660 So the very definition of fast enough is changing. 262 00:14:04,660 --> 00:14:08,760 It's changing very rapidly. A truly provocative thought to mull over. If you want to 263 00:14:08,760 --> 00:14:09,980 explore this any further, 264 00:14:09,980 --> 00:14:13,960 you can check out the quick start, the FAQ, and all the documentation through the 265 00:14:13,960 --> 00:14:15,000 Apache Druid project. 266 00:14:15,000 --> 00:14:19,340 We really encourage you to dive deeper. Thank you for joining us for this deep dive, 267 00:14:19,340 --> 00:14:20,660 and once again a huge 268 00:14:20,660 --> 00:14:22,660 thanks to our sponsor SafeServer. 269 00:14:22,660 --> 00:14:26,120 Remember, SafeServer cares for the hosting of this type of software and supports 270 00:14:26,120 --> 00:14:27,860 you in your digital transformation. 271 00:14:27,860 --> 00:14:29,940 You can find more information at 272 00:14:29,940 --> 00:14:32,440 www.safeserver.de.