1 00:00:00,000 --> 00:00:01,520 Welcome back to the Deep Dive. 2 00:00:01,520 --> 00:00:04,480 So you asked us to really shortcut the learning curve 3 00:00:04,480 --> 00:00:07,720 on a big topic: workflow orchestration. 4 00:00:07,720 --> 00:00:10,840 And specifically, a tool that's been making some waves. 5 00:00:10,840 --> 00:00:11,680 It's called Dagu. 6 00:00:11,680 --> 00:00:13,280 Yeah, Dagu. 7 00:00:13,280 --> 00:00:16,280 And it's known for this surprisingly lightweight 8 00:00:16,280 --> 00:00:16,880 approach. 9 00:00:16,880 --> 00:00:19,940 If you've ever had to manage complex automated processes, 10 00:00:19,940 --> 00:00:21,680 you know the headache we're talking about. 11 00:00:21,680 --> 00:00:22,560 Oh, absolutely. 12 00:00:22,560 --> 00:00:25,560 You've got dozens of different tasks, right? 13 00:00:25,560 --> 00:00:28,400 Python script over here, maybe an old shell script there, 14 00:00:28,400 --> 00:00:30,480 a few remote database backups. 15 00:00:30,480 --> 00:00:33,840 They're all tied together by these fragile implicit 16 00:00:33,840 --> 00:00:37,720 dependencies, scheduled by messy old school cron jobs. 17 00:00:37,720 --> 00:00:39,440 Exactly, and when one of them fails, 18 00:00:39,440 --> 00:00:41,640 figuring out what broke, why it broke, 19 00:00:41,640 --> 00:00:44,040 and which other tasks you have to manually rerun. 20 00:00:44,040 --> 00:00:45,400 It's not debugging at that point. 21 00:00:45,400 --> 00:00:47,320 No, it's what we call an archaeological dig. 22 00:00:47,320 --> 00:00:50,000 You're just sifting through fragmented server logs 23 00:00:50,000 --> 00:00:52,360 and ancient config files. 24 00:00:52,360 --> 00:00:54,920 That pain, that manual dependency tracking, 25 00:00:54,920 --> 00:00:57,560 that is exactly the complexity Dagu 26 00:00:57,560 --> 00:00:59,320 aims to just get rid of. 27 00:00:59,320 --> 00:01:01,800 So our mission today is to take the sources you sent us.
28 00:01:01,800 --> 00:01:03,640 Right, the docs, the comparisons, 29 00:01:03,640 --> 00:01:05,080 community discussions. 30 00:01:05,080 --> 00:01:06,600 And really understand how this tool 31 00:01:06,600 --> 00:01:09,560 can be so powerful for production, 32 00:01:09,560 --> 00:01:12,400 but also simple enough that you can set it up instantly. 33 00:01:12,400 --> 00:01:15,120 It's a really easy entry point into a field that's 34 00:01:15,120 --> 00:01:17,840 usually pretty complex. 35 00:01:17,840 --> 00:01:18,900 It really is. 36 00:01:18,900 --> 00:01:20,520 Now, before we plunge into the details, 37 00:01:20,520 --> 00:01:21,980 just a quick word from our sponsor 38 00:01:21,980 --> 00:01:23,920 who makes all this possible. 39 00:01:23,920 --> 00:01:26,400 This deep dive is supported by Safe Server. 40 00:01:26,400 --> 00:01:28,600 Safe Server handles the hosting of software, 41 00:01:28,600 --> 00:01:31,760 making sure your critical tools are always running smoothly, 42 00:01:31,760 --> 00:01:34,680 and they support you in your digital transformation. 43 00:01:34,680 --> 00:01:39,520 You can find out more at www.safeserver.de. 44 00:01:39,520 --> 00:01:42,240 OK, so let's unpack the foundational idea here. 45 00:01:42,240 --> 00:01:44,400 When we talk about workflow orchestration, 46 00:01:44,400 --> 00:01:46,880 we're really dealing with one core concept. 47 00:01:46,880 --> 00:01:49,440 The directed acyclic graph, the DAG. 48 00:01:49,440 --> 00:01:50,760 Exactly, the DAG. 49 00:01:50,760 --> 00:01:53,220 For anyone learning, you can just think of it as a flowchart. 50 00:01:53,220 --> 00:01:55,640 It's a visual map of all the steps in your process. 51 00:01:55,640 --> 00:01:57,080 And the arrows show the order, right? 52 00:01:57,080 --> 00:01:59,760 Step A has to finish before step B can even start. 53 00:01:59,760 --> 00:02:00,800 Precisely. 
54 00:02:00,800 --> 00:02:03,040 The problem with those legacy cron jobs you mentioned 55 00:02:03,040 --> 00:02:04,340 is that the DAG is implicit. 56 00:02:04,340 --> 00:02:07,400 It's just in your head or buried in scripts. 57 00:02:07,400 --> 00:02:10,040 So Dagu makes you define it explicitly. 58 00:02:10,040 --> 00:02:11,880 It forces you to. 59 00:02:11,880 --> 00:02:14,240 But here's the key differentiator. 60 00:02:14,240 --> 00:02:17,080 Dagu is designed for systems where you already 61 00:02:17,080 --> 00:02:18,720 have these complex jobs running. 62 00:02:18,720 --> 00:02:21,140 Maybe in Perl or Shell script. 63 00:02:21,140 --> 00:02:22,680 Or some ancient version of Java. 64 00:02:22,680 --> 00:02:23,680 Right. 65 00:02:23,680 --> 00:02:25,480 It lets you orchestrate them without making 66 00:02:25,480 --> 00:02:26,880 you rewrite everything. 67 00:02:26,880 --> 00:02:31,450 And more importantly, without forcing you to define the DAG in a language like 68 00:02:31,450 --> 00:02:32,240 Python, 69 00:02:32,240 --> 00:02:34,080 which a lot of the bigger tools require. 70 00:02:34,080 --> 00:02:36,320 So it's configuration, not coding. 71 00:02:36,320 --> 00:02:37,480 That's the perfect way to put it. 72 00:02:37,480 --> 00:02:41,160 That leads us right into the simplicity factor, which honestly is pretty 73 00:02:41,160 --> 00:02:42,280 astonishing. 74 00:02:42,280 --> 00:02:45,560 Most of these tools, they demand so much infrastructure just to get started. 75 00:02:45,560 --> 00:02:49,600 A huge external database, multiple worker services. 76 00:02:49,600 --> 00:02:51,740 Configuration files spread across five different folders. 77 00:02:51,740 --> 00:02:53,040 It's a lot. 78 00:02:53,040 --> 00:02:55,840 Dagu promises what they call instant setup. 79 00:02:55,840 --> 00:02:57,200 And being air-gapped ready. 80 00:02:57,200 --> 00:02:58,200 Yes. 81 00:02:58,200 --> 00:03:01,800 And the core of that promise is the single binary advantage.
82 00:03:01,800 --> 00:03:04,840 You install it by just placing one executable file. 83 00:03:04,840 --> 00:03:05,840 That's it. 84 00:03:05,840 --> 00:03:06,840 And it runs instantly. 85 00:03:06,840 --> 00:03:10,200 It doesn't need an external database or any specific cloud service. 86 00:03:10,200 --> 00:03:13,090 For the learner, this means you can try it out and have a fully working system in 87 00:03:13,090 --> 00:03:13,680 minutes. 88 00:03:13,680 --> 00:03:14,880 Even on a laptop. 89 00:03:14,880 --> 00:03:17,960 Even on a laptop or in some isolated test environment. 90 00:03:17,960 --> 00:03:21,800 The setup is literally a simple curl command to download the binary. 91 00:03:21,800 --> 00:03:23,680 And you type dagu start-all. 92 00:03:23,680 --> 00:03:24,680 And you're done. 93 00:03:24,680 --> 00:03:25,680 And you're done. 94 00:03:25,680 --> 00:03:29,120 The web UI is running, usually at localhost:8080. 95 00:03:29,120 --> 00:03:32,420 That zero dependency approach is, well, it's profound. 96 00:03:32,420 --> 00:03:35,280 It cuts down on the operational overhead, I imagine. 97 00:03:35,280 --> 00:03:36,280 Hugely. 98 00:03:36,280 --> 00:03:40,480 No database connection issues, no complex security groups to configure. 99 00:03:40,480 --> 00:03:43,160 The whole architecture is just concise. 100 00:03:43,160 --> 00:03:47,840 Workflows are in files, logs are structured files, history is stored in JSON files. 101 00:03:47,840 --> 00:03:50,560 Okay, so let me bring in some critical thinking here, because that raises a 102 00:03:50,560 --> 00:03:51,720 fascinating question. 103 00:03:51,720 --> 00:03:58,840 If it's all file-based storage, you know, YAML, JSON, doesn't that risk performance 104 00:03:58,840 --> 00:03:58,960 or 105 00:03:58,960 --> 00:04:03,780 reliability compared to, say, a dedicated database like PostgreSQL?
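The setup flow described above can be sketched as a couple of shell commands. The installer URL and defaults here are assumptions based on the project's README, so check the Dagu repository for the current instructions before running anything.

```shell
# Download the Dagu binary via the project's install script
# (URL is illustrative -- consult the official README for the current one).
curl -L https://raw.githubusercontent.com/dagu-org/dagu/main/scripts/installer.sh | bash

# Launch the scheduler and the web UI together.
# By default the UI should come up at http://localhost:8080.
dagu start-all
```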
106 00:04:03,780 --> 00:04:06,850 That is an excellent point, and it cuts right to the philosophical trade-off that 107 00:04:06,850 --> 00:04:07,120 Dagu 108 00:04:07,120 --> 00:04:08,120 has made. 109 00:04:08,120 --> 00:04:09,120 Okay. 110 00:04:09,120 --> 00:04:10,120 And you're right. 111 00:04:10,120 --> 00:04:13,490 For pure, massive-scale data analytics where you need to run complex SQL queries on 112 00:04:13,490 --> 00:04:13,960 billions 113 00:04:13,960 --> 00:04:16,520 of records, a real database is better. 114 00:04:16,520 --> 00:04:17,520 No question. 115 00:04:17,520 --> 00:04:18,560 But Dagu's not for that. 116 00:04:18,560 --> 00:04:20,800 It's targeting a different pain point. 117 00:04:20,800 --> 00:04:25,290 It's for migrating away from those legacy cron systems, where you're dealing with 118 00:04:25,290 --> 00:04:25,800 hundreds 119 00:04:25,800 --> 00:04:29,080 or maybe thousands of runs a day, not millions per hour. 120 00:04:29,080 --> 00:04:30,080 I see. 121 00:04:30,080 --> 00:04:33,810 By using file-based storage, they get rid of the single biggest complexity and 122 00:04:33,810 --> 00:04:34,340 security 123 00:04:34,340 --> 00:04:38,480 headache in setting up enterprise software, the database. 124 00:04:38,480 --> 00:04:42,600 They're trading that hyperscale querying for operational simplicity. 125 00:04:42,600 --> 00:04:46,920 And for most people coming from just checking logs with SSH, it's a huge step up. 126 00:04:46,920 --> 00:04:51,080 A massive step up with zero operational management overhead. 127 00:04:51,080 --> 00:04:54,500 And that trade-off is often worth it for teams that just want to move fast. 128 00:04:54,500 --> 00:04:59,000 So if the setup is instant, the next step is obviously defining the workflows. 129 00:04:59,000 --> 00:05:02,500 How do you get those messy cron jobs into Dagu? 130 00:05:02,500 --> 00:05:05,280 Well that brings us to what they call universal execution.
131 00:05:05,280 --> 00:05:07,140 And it's all defined in simple YAML. 132 00:05:07,140 --> 00:05:09,200 The interaction is really declarative. 133 00:05:09,200 --> 00:05:11,760 You're not writing boilerplate code in Python. 134 00:05:11,760 --> 00:05:14,060 You're just defining your pipeline in YAML. 135 00:05:14,060 --> 00:05:16,840 Which stands for Yet Another Markup Language. 136 00:05:16,840 --> 00:05:17,840 It's incredibly readable. 137 00:05:17,840 --> 00:05:21,140 Even if you've never coded, you can pretty much figure out what the file is telling 138 00:05:21,140 --> 00:05:21,360 Dagu 139 00:05:21,360 --> 00:05:22,360 to do. 140 00:05:22,360 --> 00:05:23,360 So let's walk through it. 141 00:05:23,360 --> 00:05:24,360 You start with the schedule. 142 00:05:24,360 --> 00:05:25,360 Yeah. 143 00:05:25,360 --> 00:05:26,360 You start with the schedule. 144 00:05:26,360 --> 00:05:29,000 You use a standard cron expression, which is just a common way to set recurring 145 00:05:29,000 --> 00:05:29,520 times. 146 00:05:29,520 --> 00:05:32,600 Something like 0 0 * * * for midnight daily. 147 00:05:32,600 --> 00:05:33,600 Simple enough. 148 00:05:33,600 --> 00:05:36,200 Then you just define your steps by name and the command you want to run. 149 00:05:36,200 --> 00:05:40,760 And what's really powerful here is that the same simple YAML structure can handle 150 00:05:40,760 --> 00:05:41,360 completely 151 00:05:41,360 --> 00:05:42,980 different kinds of tasks. 152 00:05:42,980 --> 00:05:44,760 This universal execution thing. 153 00:05:44,760 --> 00:05:45,760 Exactly. 154 00:05:45,760 --> 00:05:48,840 One step might be a simple local Python script. 155 00:05:48,840 --> 00:05:53,200 The command is just command: python dataextract.py. 156 00:05:53,200 --> 00:05:54,800 Dagu runs that on the host machine. 157 00:05:54,800 --> 00:05:55,800 Okay. 158 00:05:55,800 --> 00:05:58,960 But then the very next step in the same file could be a remote command, right?
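Putting the pieces just described together, a minimal Dagu definition might look like the sketch below. The field names (schedule, steps, name, command, depends) follow Dagu's documented YAML format, but treat the file and step names as illustrative.

```yaml
# daily_etl.yaml -- a minimal Dagu workflow sketch (names are hypothetical)
schedule: "0 0 * * *"    # standard cron expression: midnight every day

steps:
  - name: extract
    command: python dataextract.py   # runs on the host machine

  - name: load
    command: ./load_results.sh
    depends:
      - extract                      # this step waits until extract finishes
```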
159 00:05:58,960 --> 00:05:59,960 Over SSH. 160 00:05:59,960 --> 00:06:00,960 Yep. 161 00:06:00,960 --> 00:06:02,360 You just add executor: ssh. 162 00:06:02,360 --> 00:06:07,240 And now Dagu is telling a distant server to run, say, command: backup_database.sh. 163 00:06:07,240 --> 00:06:08,240 Wow. 164 00:06:08,240 --> 00:06:10,560 And then you could have a task that needs total isolation. 165 00:06:10,560 --> 00:06:11,560 Right. 166 00:06:11,560 --> 00:06:15,360 Instead of running on the server, you can tell Dagu to use the Docker executor. 167 00:06:15,360 --> 00:06:22,320 So you just add executor: docker with an image like python:3.11 and a command like python process.py. 168 00:06:22,320 --> 00:06:27,600 And by doing that, you're telling Dagu to spin up a totally clean, isolated Python 169 00:06:27,600 --> 00:06:28,280 environment 170 00:06:28,280 --> 00:06:32,840 just for that script, run it, get the result, and then tear the whole thing down. 171 00:06:32,840 --> 00:06:37,330 So that unifies shell scripts, remote servers, and containers into one readable 172 00:06:37,330 --> 00:06:37,800 file. 173 00:06:37,800 --> 00:06:38,800 That's a huge deal. 174 00:06:38,800 --> 00:06:39,800 It is. 175 00:06:39,800 --> 00:06:43,920 And what's particularly fascinating, and this was a game changer in a recent release, 176 00:06:43,920 --> 00:06:45,680 is the GitHub Actions Executor. 177 00:06:45,680 --> 00:06:46,840 OK, tell me about that. 178 00:06:46,840 --> 00:06:48,520 Well, think about the ecosystem. 179 00:06:48,520 --> 00:06:52,760 There are over 20,000 GitHub Actions available for everything, from checking your 180 00:06:52,760 --> 00:06:53,040 code to 181 00:06:53,040 --> 00:06:54,560 deploying infrastructure. 182 00:06:54,560 --> 00:06:57,040 And this executor lets you run them in Dagu? 183 00:06:57,040 --> 00:07:02,080 Any of them, locally, without having to spin up a full CI/CD platform.
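The local, SSH, and Docker steps described above could sit side by side in one file, roughly like this sketch. The executor types (ssh, docker) match Dagu's documented executors, but the hostnames, paths, and exact config keys here are assumptions for illustration.

```yaml
# One workflow mixing a local step, an SSH step, and a Docker step
# (hosts, keys, and script names are hypothetical).
steps:
  - name: extract
    command: python dataextract.py      # runs locally on the host

  - name: backup
    executor:
      type: ssh
      config:
        user: admin
        ip: backup-host.internal        # hypothetical remote server
        key: /home/admin/.ssh/id_rsa
    command: ./backup_database.sh       # executed on the remote machine
    depends:
      - extract

  - name: process
    executor:
      type: docker
      config:
        image: python:3.11              # clean, isolated Python environment
        autoRemove: true                # tear the container down afterwards
    command: python process.py
    depends:
      - backup
```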
184 00:07:02,080 --> 00:07:05,200 It's a massive shortcut for testing and local automation. 185 00:07:05,200 --> 00:07:08,840 You're basically bringing the power of the cloud's automation ecosystem down to 186 00:07:08,840 --> 00:07:08,960 your 187 00:07:08,960 --> 00:07:10,480 server or laptop. 188 00:07:10,480 --> 00:07:12,320 All managed through that simple YAML. 189 00:07:12,320 --> 00:07:13,580 And it's not just code, right? 190 00:07:13,580 --> 00:07:14,720 We saw other executors. 191 00:07:14,720 --> 00:07:18,520 Yeah, like HTTP for making API calls in a sequence. 192 00:07:18,520 --> 00:07:22,240 And even JQ for doing advanced JSON processing right inside the workflow. 193 00:07:22,240 --> 00:07:27,000 So it really is a single control plane for code, infrastructure, and data. 194 00:07:27,000 --> 00:07:28,000 That's the goal. 195 00:07:28,000 --> 00:07:29,920 OK, so we know it's lightweight. 196 00:07:29,920 --> 00:07:31,920 We know it's simple to define workflows. 197 00:07:31,920 --> 00:07:37,560 But we have to ask, can this little single binary really stand up to a production 198 00:07:37,560 --> 00:07:38,480 environment? 199 00:07:38,480 --> 00:07:41,880 For the learner looking to adopt this, that's the big question. 200 00:07:41,880 --> 00:07:42,880 Absolutely. 201 00:07:42,880 --> 00:07:47,430 The sources are clear: it is packed with production-ready features, and it 202 00:07:47,430 --> 00:07:47,760 manages 203 00:07:47,760 --> 00:07:51,240 the common headaches right out of the box, starting with resilience. 204 00:07:51,240 --> 00:07:52,880 You mean error handling? 205 00:07:52,880 --> 00:07:53,880 Exactly. 206 00:07:53,880 --> 00:08:00,120 If a task fails because of some temporary network glitch, Dagu handles automatic 207 00:08:00,120 --> 00:08:00,940 retries. 208 00:08:00,940 --> 00:08:04,800 But it does it smartly, using something called exponential backoff. 209 00:08:04,800 --> 00:08:05,800 Explain that.
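An HTTP step like the one mentioned above might be sketched as follows. The executor type http matches Dagu's docs, but the endpoint and the config keys shown are illustrative assumptions.

```yaml
# Calling an API as one step in a pipeline (endpoint is hypothetical).
steps:
  - name: notify
    executor:
      type: http
      config:
        timeout: 10                 # seconds before the call is abandoned
    command: POST https://api.example.com/v1/notify
```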
210 00:08:05,800 --> 00:08:06,800 What is exponential backoff? 211 00:08:06,800 --> 00:08:09,360 It just means Dagu doesn't just try again immediately. 212 00:08:09,360 --> 00:08:12,920 It waits a bit after the first failure, then waits a lot longer after the second, 213 00:08:12,920 --> 00:08:13,280 and so 214 00:08:13,280 --> 00:08:14,280 on. 215 00:08:14,280 --> 00:08:17,080 Ah, so it's not hammering a system that might already be struggling. 216 00:08:17,080 --> 00:08:18,080 Precisely. 217 00:08:18,080 --> 00:08:21,170 It gives the external system time to recover, which dramatically improves the 218 00:08:21,170 --> 00:08:21,640 stability 219 00:08:21,640 --> 00:08:22,640 of the whole workflow. 220 00:08:22,640 --> 00:08:23,880 OK, so that's resilience. 221 00:08:23,880 --> 00:08:25,100 What about scaling? 222 00:08:25,100 --> 00:08:28,720 How does a single binary handle running jobs across multiple machines? 223 00:08:28,720 --> 00:08:32,040 Right, they mention distributed execution and queue management. 224 00:08:32,040 --> 00:08:35,760 The way it works is pretty clever, and it stays true to that lightweight design. 225 00:08:35,760 --> 00:08:38,320 No central database to coordinate things. 226 00:08:38,320 --> 00:08:39,320 Nope. 227 00:08:39,320 --> 00:08:43,810 The Dagu instances can coordinate just by sharing a persistent file system, like a 228 00:08:43,810 --> 00:08:44,360 network 229 00:08:44,360 --> 00:08:45,360 share. 230 00:08:45,360 --> 00:08:49,200 They use that shared space to sync up their state and manage the queues. 231 00:08:49,200 --> 00:08:52,400 Which lets you control how many jobs can run at the same time. 232 00:08:52,400 --> 00:08:53,400 Exactly. 233 00:08:53,400 --> 00:08:58,120 So you can scale out your execution power without having to scale up a big complex 234 00:08:58,120 --> 00:08:58,880 database. 235 00:08:58,880 --> 00:09:01,440 And for organizing things as they get more complex.
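The retry behavior described above is configured per step. The retryPolicy block with limit and intervalSec follows Dagu's documented format; the backoff multiplier is a newer addition, so verify the exact field name against your Dagu version.

```yaml
# A flaky step with retries and exponential backoff
# (endpoint is hypothetical; check the backoff field against your version).
steps:
  - name: fetch
    command: curl -fsS https://api.example.com/data
    retryPolicy:
      limit: 3          # retry up to 3 times on failure
      intervalSec: 5    # base wait before the first retry
      backoff: 2.0      # roughly double the wait after each failure
```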
236 00:09:01,440 --> 00:09:03,720 I love the nested workflows feature. 237 00:09:03,720 --> 00:09:06,820 It's basically like creating functions for your pipelines. 238 00:09:06,820 --> 00:09:11,990 You can define a small, reusable DAG, like a data cleanup process, and then just 239 00:09:11,990 --> 00:09:12,040 call 240 00:09:12,040 --> 00:09:14,800 it as a single step inside a much bigger workflow. 241 00:09:14,800 --> 00:09:15,800 Keeps things tidy. 242 00:09:15,800 --> 00:09:16,800 Very tidy. 243 00:09:16,800 --> 00:09:18,360 And they also have conditional steps. 244 00:09:18,360 --> 00:09:22,740 So a task will only run if a certain condition is met, maybe based on the output of 245 00:09:22,740 --> 00:09:23,360 a previous 246 00:09:23,360 --> 00:09:24,360 task. 247 00:09:24,360 --> 00:09:25,800 It becomes a truly dynamic pipeline. 248 00:09:25,800 --> 00:09:29,340 We also saw some enterprise-grade features for scheduling and security. 249 00:09:29,340 --> 00:09:30,340 We did. 250 00:09:30,340 --> 00:09:33,860 The advanced scheduler is important because it's not just tied to the server's 251 00:09:33,860 --> 00:09:34,720 local time. 252 00:09:34,720 --> 00:09:36,280 It supports time zone awareness. 253 00:09:36,280 --> 00:09:37,940 With the CRON_TZ variable. 254 00:09:37,940 --> 00:09:38,940 Right. 255 00:09:38,940 --> 00:09:43,530 So your server can be in London, but your process can kick off at 3 a.m. New York 256 00:09:43,530 --> 00:09:44,200 time. 257 00:09:44,200 --> 00:09:46,500 And Dagu handles that perfectly. 258 00:09:46,500 --> 00:09:49,400 And for security in a corporate environment. 259 00:09:49,400 --> 00:09:55,100 They support basic auth and, more importantly, OIDC authentication. 260 00:09:55,100 --> 00:09:56,340 OpenID Connect. 261 00:09:56,340 --> 00:09:57,720 That's the modern standard. 262 00:09:57,720 --> 00:09:58,720 It is.
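The nested workflows, conditional steps, and timezone-aware scheduling mentioned above could combine into something like this sketch. The run, params, preconditions, and CRON_TZ pieces follow Dagu's documented features, but the names and values here are hypothetical.

```yaml
# Timezone-aware schedule plus a reusable sub-workflow and a conditional step.
schedule: "CRON_TZ=America/New_York 0 3 * * *"   # 3 a.m. New York time

steps:
  - name: cleanup
    run: data_cleanup            # call another DAG file like a function
    params: "TARGET=/var/data"   # hypothetical parameter passed to it

  - name: deploy
    command: ./deploy.sh
    depends:
      - cleanup
    preconditions:
      - condition: "${ENVIRONMENT}"   # only runs when this evaluates as expected
        expected: "production"
```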
263 00:09:58,720 --> 00:10:02,920 It lets you use your company's existing sign-on system to secure the web UI, the 264 00:10:02,920 --> 00:10:04,040 logs, everything. 265 00:10:04,040 --> 00:10:08,040 Looking at their roadmap, it seems like they're really committed to maturing this. 266 00:10:08,040 --> 00:10:09,040 Yeah. 267 00:10:09,040 --> 00:10:10,040 That underscores it. 268 00:10:10,040 --> 00:10:11,920 They're prioritizing key enterprise needs. 269 00:10:11,920 --> 00:10:15,050 Things like human-in-the-loop approvals, where a workflow literally pauses until a 270 00:10:15,050 --> 00:10:15,400 person 271 00:10:15,400 --> 00:10:16,600 clicks approve. 272 00:10:16,600 --> 00:10:18,520 And robust secret management. 273 00:10:18,520 --> 00:10:19,520 Which is critical. 274 00:10:19,520 --> 00:10:23,040 Integrating with tools like KMS or Vault so that you never have to put passwords 275 00:10:23,040 --> 00:10:23,680 or API 276 00:10:23,680 --> 00:10:25,600 keys directly in your workflow files. 277 00:10:25,600 --> 00:10:29,190 It shows they're serious about enterprise use cases, even with this simple 278 00:10:29,190 --> 00:10:30,120 architecture. 279 00:10:30,120 --> 00:10:34,610 So we started this deep dive with that familiar frustration of legacy scheduling, 280 00:10:34,610 --> 00:10:35,200 you know, 281 00:10:35,200 --> 00:10:41,200 the implicit dependencies, the fragmented logs, chasing down failed cron jobs. 282 00:10:41,200 --> 00:10:42,200 Archaeological dig. 283 00:10:42,200 --> 00:10:43,520 The dig, yeah. 284 00:10:43,520 --> 00:10:48,510 And we found Dagu a really compelling, lightweight solution defined in readable 285 00:10:48,510 --> 00:10:50,120 declarative YAML. 286 00:10:50,120 --> 00:10:53,190 Deployable as a single binary, but it can orchestrate remote commands, local 287 00:10:53,190 --> 00:10:53,720 scripts, 288 00:10:53,720 --> 00:10:54,780 Docker containers. 289 00:10:54,780 --> 00:10:58,080 And even that massive library of GitHub Actions.
290 00:10:58,080 --> 00:11:02,470 The key takeaway for you, the learner, has to be the value of declarative 291 00:11:02,470 --> 00:11:03,560 configuration. 292 00:11:03,560 --> 00:11:07,560 Dagu just reduces the cognitive load so much. 293 00:11:07,560 --> 00:11:11,260 You manage complex systems with config files, not boilerplate code. 294 00:11:11,260 --> 00:11:15,530 And that simple YAML translates directly into better visualization and much, much 295 00:11:15,530 --> 00:11:15,960 easier 296 00:11:15,960 --> 00:11:17,400 long-term maintenance. 297 00:11:17,400 --> 00:11:20,800 The developers were asked directly, why not just use something like Airflow? 298 00:11:20,800 --> 00:11:21,800 Right. 299 00:11:21,800 --> 00:11:24,340 And their answer really reveals Dagu's core strength. 300 00:11:24,340 --> 00:11:28,270 It's built to take your existing programs and scripts and orchestrate them without 301 00:11:28,270 --> 00:11:28,520 you 302 00:11:28,520 --> 00:11:29,840 needing to modify them. 303 00:11:29,840 --> 00:11:32,470 So if you have a working Python script, you don't need to wrap it in a bunch of 304 00:11:32,470 --> 00:11:33,360 framework-specific 305 00:11:33,360 --> 00:11:35,120 code just to schedule it. 306 00:11:35,120 --> 00:11:37,320 You just point Dagu's executor at it. 307 00:11:37,320 --> 00:11:41,080 That incredibly low barrier to adoption is what really sets it apart. 308 00:11:41,080 --> 00:11:43,600 So here's a final thought for you to explore. 309 00:11:43,600 --> 00:11:47,400 Consider a complex multi-server process in your own work. 310 00:11:47,400 --> 00:11:50,280 Right now you might be managing it with a bunch of different server logs and manual 311 00:11:50,280 --> 00:11:51,280 checks. 
312 00:11:51,280 --> 00:11:55,540 How much simpler, how much more reliable would that be if the entire pipeline, the 313 00:11:55,540 --> 00:11:56,240 dependencies, 314 00:11:56,240 --> 00:12:02,210 the status, the logs, was all visualized as a single explicit DAG, accessible right 315 00:12:02,210 --> 00:12:02,600 from 316 00:12:02,600 --> 00:12:03,600 a web browser? 317 00:12:03,600 --> 00:12:07,120 Instead of being buried and fragmented across half a dozen different screens in 318 00:12:07,120 --> 00:12:08,120 server terminals. 319 00:12:08,120 --> 00:12:12,520 That vision of centralized control over complexity is definitely food for thought. 320 00:12:12,520 --> 00:12:15,180 Thank you for joining us for this deep dive into Dagu. 321 00:12:15,180 --> 00:12:18,700 We hope you feel equipped to tackle workflow orchestration with a newfound 322 00:12:18,700 --> 00:12:19,440 appreciation 323 00:12:19,440 --> 00:12:21,160 for lightweight power. 324 00:12:21,160 --> 00:12:25,090 And thank you again to our sponsor, SafeServer, who supports the hosting of this 325 00:12:25,090 --> 00:12:25,520 deep dive 326 00:12:25,520 --> 00:12:28,040 and all your digital transformation needs. 327 00:12:28,040 --> 00:12:32,800 You can learn more about how they can help you at www.safeserver.de. 328 00:12:32,800 --> 00:12:34,040 We'll catch you on the next deep dive.