Silbernagel Dev RSS | Matt Silbernagel | matt@silbernagel.dev | https://silbernagel.dev/posts | 2024-03-29T00:50:47.077592Z<p>
Today I released a new open-source project, Beamring. It’s a simple implementation of a webring built on the BEAM for the BEAM community.</p> <ul> <li>
<a href="https://beamring.io">The Beamring site</a> </li>
<li>
<a href="https://en.wikipedia.org/wiki/Webring">Introduction to Webrings</a> </li>
<li>
<a href="https://github.com/silbermm/beamring">The Source code</a> </li> </ul> <h2> Why?</h2> <p> Before there were reliable search engines or big social media networks, discovery on the web was hard. Webrings were formed as a way to link similar content sites together and provided a way for people to find the content they wanted. </p> <p> As the walled gardens (Facebook, Twitter, and Reddit) are falling apart and big search engines are less and less trustworthy, the Indieweb has a chance to flourish and webrings could prove to be useful once again. </p> <p> <a href="https://indieweb.org/">The Indieweb</a></p> <h2> Can my site be added?</h2> <p> I’m glad you asked!</p> <p> If you have a site that you feel should be added to Beamring, there are two steps to take.</p> <h3> Step One</h3> <p> First, you’ll need to add a small code snippet to your site, something similar to the following:</p> <pre><code class="makeup html"><span class="p" data-group-id="6121222312-1"><</span><span class="k">div</span><span class="p" data-group-id="6121222312-1">></span><span class="s"> </span><span class="p" data-group-id="6121222312-2"><</span><span class="k">p</span><span class="p" data-group-id="6121222312-2">></span><span class="s">
</span><span class="p" data-group-id="6121222312-3"><</span><span class="k">a</span><span class="w"> </span><span class="na">href</span><span class="o">=</span><span class="s">"https://beamring.io/previous?host</span><span class="o">=</span><span class="s">https://yoursite.com"</span><span class="p" data-group-id="6121222312-3">></span><span class="s">←</span><span class="p" data-group-id="6121222312-4"></</span><span class="k">a</span><span class="p" data-group-id="6121222312-4">></span><span class="s">
</span><span class="p" data-group-id="6121222312-5"><</span><span class="k">a</span><span class="w"> </span><span class="na">href</span><span class="o">=</span><span class="s">"https://beamring.io"</span><span class="p" data-group-id="6121222312-5">></span><span class="s">Beamring</span><span class="p" data-group-id="6121222312-6"></</span><span class="k">a</span><span class="p" data-group-id="6121222312-6">></span><span class="s">
</span><span class="p" data-group-id="6121222312-7"><</span><span class="k">a</span><span class="w"> </span><span class="na">href</span><span class="o">=</span><span class="s">"https://beamring.io/next?host</span><span class="o">=</span><span class="s">https://yoursite.com"</span><span class="p" data-group-id="6121222312-7">></span><span class="s">→</span><span class="p" data-group-id="6121222312-8"></</span><span class="k">a</span><span class="p" data-group-id="6121222312-8">></span><span class="s">
</span><span class="p" data-group-id="6121222312-9"></</span><span class="k">p</span><span class="p" data-group-id="6121222312-9">></span><span class="s">
</span><span class="p" data-group-id="6121222312-10"></</span><span class="k">div</span><span class="p" data-group-id="6121222312-10">></span></code></pre> <p> This is what makes the webring work. Each site that is part of the ring does this, and the links redirect the visitor to the next or previous site in the ring.</p> <h3> Step Two</h3> <p> <a href="https://github.com/silbermm/beamring/issues/new?assignees=silbermm&labels=new&projects=&template=add_site.yml&title=%5BAdd%5D%3A+">Fill out this GitHub issue</a></p> <p> Use that link to request being added; once I’ve validated that the above markup is on your site, you’ll be added.</p> <h2> What’s Next</h2> <p> I don’t have much else planned for this tiny project, but I’m always happy to take feature requests. </p> <ul> <li>
<a href="https://github.com/silbermm/beamring/issues">Create a GitHub issue for feature requests</a> </li> </ul> <p> <a href="https://indieweb.org/Getting_Started">Go forth and explore the Indieweb</a></p></p>
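The redirect behind those previous/next links can be sketched in a few lines of shell (a hypothetical illustration of the idea, not Beamring’s actual code): the ring is an ordered list of member sites, and `/next` resolves to the entry after the requesting host, wrapping from the last site back to the first.

```shell
#!/usr/bin/env bash
# Hypothetical sketch of the lookup behind /next?host=... (example hosts, not real members).
sites=("https://a.example" "https://b.example" "https://c.example")

next_site() {
  local host="$1" n=${#sites[@]} i
  for i in "${!sites[@]}"; do
    if [ "${sites[$i]}" = "$host" ]; then
      # Modulo arithmetic wraps the last site back around to the first.
      echo "${sites[$(( (i + 1) % n ))]}"
      return 0
    fi
  done
  echo "${sites[0]}"  # unknown host: fall back to the start of the ring
}

next_site "https://c.example"   # prints https://a.example
```

The `/previous` link is the same lookup with `(i - 1 + n) % n`.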
Matt Silbernagel | https://silbernagel.dev/posts/introducing-beamring | Introducing Beamring | 2023-07-08T00:50:47.077618Z<p>
If you have an Elixir app running on Fly.io, it’s easy to send telemetry data to GrafanaCloud.</p> <p> Let’s start with logs.</p> <h2> Logs</h2> <p> Shipping logs to GrafanaCloud requires setting up and deploying another app called LogShipper.</p> <p> <a href="https://fly.io/docs/going-to-production/monitoring/exporting-logs/">LogShipper Documentation</a></p> <h3> Setting up LogShipper</h3> <ul> <li>
Create a new directory <pre><code>mkdir logshipper</code></pre>
</li>
<li>
Create a new app, but don’t launch yet <pre><code>fly launch --no-deploy --image ghcr.io/superfly/fly-log-shipper:latest</code></pre>
</li>
<li>
Configure your org and access token <pre><code>fly secrets set ORG=personal
fly secrets set ACCESS_TOKEN=$(fly auth token)</code></pre>
</li>
<li>
Configure your Loki credentials. Find these in the GrafanaCloud Portal, Loki section. Hint: you may have to generate an API key. <pre><code>fly secrets set LOKI_URL=
fly secrets set LOKI_USERNAME=
fly secrets set LOKI_PASSWORD=</code></pre>
</li>
<li>
Add this to the newly generated fly.toml file: <pre><code>[[services]]
http_checks = []
internal_port = 8686</code></pre>
</li>
<li>
Deploy <pre><code>flyctl deploy</code></pre>
</li> </ul> <p> Once deployed, this should start sending all the logs from all your fly apps in the configured organization to GrafanaCloud.</p> <p> <a href="https://github.com/superfly/fly-log-shipper#provider-configuration">Find all configuration options for the LogShipper app in the repository</a></p> <p> Also, I’d recommend shipping your logs in JSON format using something like the <code class="inline">logger_json</code> library. </p> <p> <a href="https://hex.pm/packages/logger_json">The <code class="inline">logger_json</code> library</a></p> <h2> Traces</h2> <p> To send Traces to GrafanaCloud Tempo, first add the OpenTelemetry libraries. Depending on your needs, you may leave off some of these.</p> <pre><code class="makeup elixir"><span class="c1"># ./mix.exs</span><span class="w"> </span><span class="kd">defp</span><span class="w"> </span><span class="nf">deps</span><span class="w"> </span><span class="k" data-group-id="1682204032-1">do</span><span class="w">
</span><span class="p" data-group-id="1682204032-2">[</span><span class="w">
</span><span class="n">...</span><span class="w">
</span><span class="p" data-group-id="1682204032-3">{</span><span class="ss">:opentelemetry_exporter</span><span class="p">,</span><span class="w"> </span><span class="s">"~> 1.0"</span><span class="p" data-group-id="1682204032-3">}</span><span class="p">,</span><span class="w">
</span><span class="p" data-group-id="1682204032-4">{</span><span class="ss">:opentelemetry</span><span class="p">,</span><span class="w"> </span><span class="s">"~> 1.0"</span><span class="p" data-group-id="1682204032-4">}</span><span class="p">,</span><span class="w">
</span><span class="p" data-group-id="1682204032-5">{</span><span class="ss">:opentelemetry_api</span><span class="p">,</span><span class="w"> </span><span class="s">"~> 1.0"</span><span class="p" data-group-id="1682204032-5">}</span><span class="p">,</span><span class="w">
</span><span class="p" data-group-id="1682204032-6">{</span><span class="ss">:opentelemetry_ecto</span><span class="p">,</span><span class="w"> </span><span class="s">"~> 1.0"</span><span class="p" data-group-id="1682204032-6">}</span><span class="p">,</span><span class="w">
</span><span class="p" data-group-id="1682204032-7">{</span><span class="ss">:opentelemetry_liveview</span><span class="p">,</span><span class="w"> </span><span class="s">"~> 1.0.0-rc.4"</span><span class="p" data-group-id="1682204032-7">}</span><span class="p">,</span><span class="w">
</span><span class="p" data-group-id="1682204032-8">{</span><span class="ss">:opentelemetry_phoenix</span><span class="p">,</span><span class="w"> </span><span class="s">"~> 1.0"</span><span class="p" data-group-id="1682204032-8">}</span><span class="p">,</span><span class="w">
</span><span class="p" data-group-id="1682204032-9">{</span><span class="ss">:opentelemetry_cowboy</span><span class="p">,</span><span class="w"> </span><span class="s">"~> 0.2"</span><span class="p" data-group-id="1682204032-9">}</span><span class="w">
</span><span class="p" data-group-id="1682204032-2">]</span><span class="w">
</span><span class="k" data-group-id="1682204032-1">end</span></code></pre> <p> Now add some configuration to the config/runtime.exs file.</p> <pre><code class="makeup elixir"><span class="c1"># ./config/runtime.exs</span><span class="w"> </span><span class="k">if</span><span class="w"> </span><span class="n">config_env</span><span class="p" data-group-id="8446938097-1">(</span><span class="p" data-group-id="8446938097-1">)</span><span class="w"> </span><span class="o">==</span><span class="w"> </span><span class="ss">:prod</span><span class="w"> </span><span class="k" data-group-id="8446938097-2">do</span><span class="w">
</span><span class="n">otel_auth</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="nc">System</span><span class="o">.</span><span class="n">get_env</span><span class="p" data-group-id="8446938097-3">(</span><span class="s">"OTEL_AUTH"</span><span class="p" data-group-id="8446938097-3">)</span><span class="w"> </span><span class="o">||</span><span class="w">
</span><span class="k">raise</span><span class="w"> </span><span class="s">"""
OTEL_AUTH is a required variable
"""</span><span class="w">
</span><span class="n">config</span><span class="w"> </span><span class="ss">:opentelemetry_exporter</span><span class="p">,</span><span class="w">
</span><span class="ss">otlp_protocol</span><span class="p">:</span><span class="w"> </span><span class="ss">:grpc</span><span class="p">,</span><span class="w">
</span><span class="ss">otlp_traces_endpoint</span><span class="p">:</span><span class="w"> </span><span class="nc">System</span><span class="o">.</span><span class="n">fetch_env!</span><span class="p" data-group-id="8446938097-4">(</span><span class="s">"OTLP_ENDPOINT"</span><span class="p" data-group-id="8446938097-4">)</span><span class="p">,</span><span class="w">
</span><span class="ss">otlp_headers</span><span class="p">:</span><span class="w"> </span><span class="p" data-group-id="8446938097-5">[</span><span class="p" data-group-id="8446938097-6">{</span><span class="s">"Authorization"</span><span class="p">,</span><span class="w"> </span><span class="s">"Basic </span><span class="si" data-group-id="8446938097-7">#{</span><span class="n">otel_auth</span><span class="si" data-group-id="8446938097-7">}</span><span class="s">"</span><span class="p" data-group-id="8446938097-6">}</span><span class="p" data-group-id="8446938097-5">]</span><span class="w">
</span><span class="k" data-group-id="8446938097-2">end</span></code></pre> <p> Next, set up the environment variables.</p> <ul> <li>
The value required for <code class="inline">OTLP_ENDPOINT</code> can be found in the Tempo section of the GrafanaCloud Portal, and will look something like <code class="inline">https://tempo-us-central1.grafana.net/tempo</code> (your URL may differ) </li>
<li>
The value for <code class="inline">OTEL_AUTH</code> is a base64-encoded value of <code class="inline">{username}:{api token}</code>. <pre><code>echo -n 'username:password' | base64</code></pre>
(replace username and password with the actual values) </li>
<li>
And you’ll need the data source name (found on the same GrafanaCloud Portal page) which will be used to set the value of <code class="inline">OTEL_RESOURCE_ATTRIBUTES</code> </li> </ul> <p> All of these values can be set with one command:</p>
<pre><code>flyctl secrets set OTLP_ENDPOINT=https://your_endpoint OTEL_RESOURCE_ATTRIBUTES=your_datasource_name OTEL_AUTH=your_base64_encoded_string</code></pre> <p> After setting these values and deploying the application, traces should start showing up in Grafana!</p> <h2> Metrics</h2> <p> For metrics, I really like to use Prometheus, and I find the easiest way to get started is with <code class="inline">prom_ex</code>. PromEx provides excellent documentation that is worth reading, but here’s a quick guide to get it working:</p> <p> <a href="https://prometheus.io/docs/introduction/overview/">Prometheus Overview</a></p> <p> <a href="https://hexdocs.pm/prom_ex/readme.html">Documentation for prom_ex</a></p> <ul> <li>
Add <code class="inline">:prom_ex, "~> 1.8"</code> to your dependencies in <code class="inline">mix.exs</code> and run <code class="inline">mix deps.get</code> </li>
<li>
Run the generator <pre><code>mix prom_ex.gen.config --datasource curl</code></pre>
</li>
<li>
Add configuration in <code class="inline">config.exs</code> for the metrics server <pre><code class="makeup elixir"><span class="n">config</span><span class="w"> </span><span class="ss">:your_app</span><span class="p">,</span><span class="w"> </span><span class="nc">YourApp.PromEx</span><span class="p">,</span><span class="w">
</span><span class="ss">metrics_server</span><span class="p">:</span><span class="w"> </span><span class="p" data-group-id="6233375357-1">[</span><span class="w">
</span><span class="ss">port</span><span class="p">:</span><span class="w"> </span><span class="nc">String</span><span class="o">.</span><span class="n">to_integer</span><span class="p">(</span><span class="nc">System</span><span class="o">.</span><span class="n">get_env</span><span class="p" data-group-id="6233375357-2">(</span><span class="s">"PROM_PORT"</span><span class="p" data-group-id="6233375357-2">)</span><span class="w"> </span><span class="o">||</span><span class="w"> </span><span class="s">"9091"</span><span class="p">)</span><span class="p">,</span><span class="w">
</span><span class="ss">path</span><span class="p">:</span><span class="w"> </span><span class="s">"/metrics"</span><span class="p">,</span><span class="w">
</span><span class="ss">protocol</span><span class="p">:</span><span class="w"> </span><span class="ss">:http</span><span class="p">,</span><span class="w">
</span><span class="ss">pool_size</span><span class="p">:</span><span class="w"> </span><span class="mi">5</span><span class="w">
</span><span class="p" data-group-id="6233375357-1">]</span></code></pre>
</li>
<li>
Add <code class="inline">YourApp.PromEx</code> to your supervision tree in <code class="inline">application.ex</code> <pre><code class="makeup elixir"><span class="kd">def</span><span class="w"> </span><span class="nf">start</span><span class="p" data-group-id="6412650651-1">(</span><span class="c">_type</span><span class="p">,</span><span class="w"> </span><span class="c">_args</span><span class="p" data-group-id="6412650651-1">)</span><span class="w"> </span><span class="k" data-group-id="6412650651-2">do</span><span class="w">
</span><span class="n">children</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="p" data-group-id="6412650651-3">[</span><span class="w">
</span><span class="nc">YourAppWeb.Endpoint</span><span class="p">,</span><span class="w">
</span><span class="c1"># PromEx should be started after the Endpoint, to avoid unnecessary error messages</span><span class="w">
</span><span class="nc">YourApp.PromEx</span><span class="p">,</span><span class="w">
</span><span class="n">...</span><span class="w">
</span><span class="p" data-group-id="6412650651-3">]</span></code></pre>
</li>
<li>
Lastly, uncomment any desired plugins in the generated <code class="inline">YourApp.PromEx</code> file. </li> </ul> <p> Now running the app exposes the metrics at <code class="inline">localhost:9091/metrics</code>.</p> <p> Next, expose the metrics so that Fly can scrape them. As documented by Fly, just add the following to your fly.toml file:</p> <p> <a href="https://fly.io/docs/reference/metrics/#configuration">Fly documentation for metrics</a></p> <pre><code class="toml">[metrics]
port = 9091
path = "/metrics"</code></pre> <p> Setup the Prometheus data source in GrafanaCloud with the following properties:</p> <ul> <li>
HTTP -> URL “<a href="https://api.fly.io/prometheus/">https://api.fly.io/prometheus/</a>&lt;org-slug&gt;/” (replace <code class="inline">org-slug</code> with your organization slug) </li>
<li>
Custom HTTP Headers -> + Add Header: </li>
<li>
Header: Authorization, Value: Bearer &lt;token&gt; (replace <code class="inline">token</code> with the result of <code class="inline">flyctl auth token</code>) </li> </ul> <p> You should now see Fly metrics and <code class="inline">prom_ex</code> defined metrics in GrafanaCloud.</p> <h2> Wrap up</h2> <p> I wrote this because I wanted to add observability to my Elixir/Phoenix apps that run on Fly, and the information I needed was scattered throughout the docs.</p> <p> Happy Observing</p></p>
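Putting the datasource pieces together: the URL is just the Fly Prometheus endpoint with your org slug appended. A quick sketch (the slug <code class="inline">personal</code> is only an example; use your own):

```shell
# Compose the Fly Prometheus datasource URL from an org slug.
ORG_SLUG="personal"   # example value; substitute your organization's slug
FLY_PROM_URL="https://api.fly.io/prometheus/${ORG_SLUG}/"
echo "$FLY_PROM_URL"  # prints https://api.fly.io/prometheus/personal/
```

Paste the resulting URL into the datasource's HTTP URL field, alongside the Authorization header described above.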
Matt Silbernagel | https://silbernagel.dev/posts/elixir-fly-and-grafana-cloud | Elixir, Fly, and GrafanaCloud | 2023-05-01T00:50:47.077634Z<p>
I recently read about a newer SQLite tool called Litestream, which backs up the database to S3-compatible storage after every transaction. Litestream can also restore from that backup, so when scaling horizontally to a new server, the latest version of the DB will be available. Fly.io seems like the ideal platform, and Elixir the perfect language, for this solution.</p> <ul> <li>
<a href="https://litestream.io/">Check out Litestream</a> </li>
<li>
<a href="https://fly.io">Check out Fly.io</a> </li> </ul> <h2> Why</h2> <ul> <li>
fast data access - the SQLite DB resides on the same server as the app and, with Fly, the app can live close to the user </li>
<li>
simple local development - with SQLite, you don’t need to install DB servers locally </li>
<li>
low maintenance - SQLite doesn’t require much maintenance </li>
<li>
highly distributed - Elixir makes it all possible via its distribution capabilities </li> </ul> <h2> Getting started</h2> <p> <a href="https://github.com/silbermm/distributed_sqlite">See the companion repository for reference</a></p> <p> Create a new Phoenix app that uses SQLite as the database:</p>
<pre><code class="bash">$ mix phx.new distributed_sqlite --database sqlite3</code></pre> <p> Launch it as a new fly app:</p>
<pre><code class="bash">$ fly launch</code></pre> <ul> <li>
type in an app name (or just take the default) </li>
<li>
choose any region you like </li>
<li>
choose ‘N’ when asked if you want a Postgres database </li>
<li>
choose ‘N’ when asked if you want a Redis instance </li>
<li>
choose ‘N’ to deploy now </li> </ul> <p> An environment variable, <code class="inline">DATABASE_PATH</code>, is needed to indicate which file to use for the SQLite database. Open the <code class="inline">fly.toml</code> file and add <code class="inline">DATABASE_PATH = /app/distributed_sqlite.db</code> (use any database name you want here) under the <code class="inline">[env]</code> section and try to deploy.</p>
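For reference, the resulting section of <code class="inline">fly.toml</code> would look something like the following sketch (the filename is just an example; use whatever path you chose):

```toml
# fly.toml (sketch)
[env]
  DATABASE_PATH = "/app/distributed_sqlite.db"
```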
<pre><code class="bash">$ flyctl deploy</code></pre> <p> Success. A Phoenix app running on Fly using SQLite. Now, to see it in action.</p> <h2> Counter data model</h2> <p> Let’s build a simple, naive counter that just counts the views of each page.</p> <pre><code class="bash">$ mix phx.gen.schema Counter.PageCount page_counts page:string count:integer
$ mix ecto.migrate</code></pre> <p> Add a <code class="inline">Counter</code> module which can add page view counts:</p> <pre><code class="makeup elixir"><span class="c1"># lib/distributed_sqlite/counter.ex</span><span class="w"> </span><span class="kd">defmodule</span><span class="w"> </span><span class="nc">DistributedSqlite.Counter</span><span class="w"> </span><span class="k" data-group-id="8873056511-1">do</span><span class="w">
</span><span class="kn">alias</span><span class="w"> </span><span class="nc">DistributedSqlite.Counter.PageCount</span><span class="w">
</span><span class="kn">alias</span><span class="w"> </span><span class="nc">DistributedSqlite.Repo</span><span class="w">
</span><span class="kd">def</span><span class="w"> </span><span class="nf">count_page_view</span><span class="p" data-group-id="8873056511-2">(</span><span class="n">page_name</span><span class="p" data-group-id="8873056511-2">)</span><span class="w"> </span><span class="k" data-group-id="8873056511-3">do</span><span class="w">
</span><span class="n">page_count</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="nc">Repo</span><span class="o">.</span><span class="n">get_by</span><span class="p" data-group-id="8873056511-4">(</span><span class="nc">PageCount</span><span class="p">,</span><span class="w"> </span><span class="ss">page</span><span class="p">:</span><span class="w"> </span><span class="n">page_name</span><span class="p" data-group-id="8873056511-4">)</span><span class="w">
</span><span class="k">case</span><span class="w"> </span><span class="n">page_count</span><span class="w"> </span><span class="k" data-group-id="8873056511-5">do</span><span class="w">
</span><span class="no">nil</span><span class="w"> </span><span class="o">-></span><span class="w">
</span><span class="p" data-group-id="8873056511-6">%</span><span class="nc" data-group-id="8873056511-6">PageCount</span><span class="p" data-group-id="8873056511-6">{</span><span class="p" data-group-id="8873056511-6">}</span><span class="w">
</span><span class="o">|></span><span class="w"> </span><span class="nc">PageCount</span><span class="o">.</span><span class="n">changeset</span><span class="p" data-group-id="8873056511-7">(</span><span class="p" data-group-id="8873056511-8">%{</span><span class="ss">count</span><span class="p">:</span><span class="w"> </span><span class="mi">1</span><span class="p">,</span><span class="w"> </span><span class="ss">page</span><span class="p">:</span><span class="w"> </span><span class="n">page_name</span><span class="p" data-group-id="8873056511-8">}</span><span class="p" data-group-id="8873056511-7">)</span><span class="w">
</span><span class="o">|></span><span class="w"> </span><span class="nc">Repo</span><span class="o">.</span><span class="n">insert!</span><span class="p" data-group-id="8873056511-9">(</span><span class="p" data-group-id="8873056511-9">)</span><span class="w">
</span><span class="p" data-group-id="8873056511-10">%</span><span class="nc" data-group-id="8873056511-10">PageCount</span><span class="p" data-group-id="8873056511-10">{</span><span class="p" data-group-id="8873056511-10">}</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="n">page_count</span><span class="w"> </span><span class="o">-></span><span class="w">
</span><span class="n">page_count</span><span class="w">
</span><span class="o">|></span><span class="w"> </span><span class="nc">PageCount</span><span class="o">.</span><span class="n">changeset</span><span class="p" data-group-id="8873056511-11">(</span><span class="p" data-group-id="8873056511-12">%{</span><span class="ss">count</span><span class="p">:</span><span class="w"> </span><span class="n">page_count</span><span class="o">.</span><span class="n">count</span><span class="w"> </span><span class="o">+</span><span class="w"> </span><span class="mi">1</span><span class="p" data-group-id="8873056511-12">}</span><span class="p" data-group-id="8873056511-11">)</span><span class="w">
</span><span class="o">|></span><span class="w"> </span><span class="nc">Repo</span><span class="o">.</span><span class="n">update!</span><span class="p" data-group-id="8873056511-13">(</span><span class="p" data-group-id="8873056511-13">)</span><span class="w">
</span><span class="k" data-group-id="8873056511-5">end</span><span class="w">
</span><span class="k" data-group-id="8873056511-3">end</span><span class="w">
</span><span class="k" data-group-id="8873056511-1">end</span></code></pre> <p> Update the <code class="inline">page_controller</code> to count views</p> <pre><code class="makeup elixir"><span class="w"> </span><span class="c1"># lib/distributed_sqlite_web/controllers/page_controller.ex</span><span class="w"> </span><span class="kn">alias</span><span class="w"> </span><span class="nc">DistributedSqlite.Counter</span><span class="w">
</span><span class="kd">def</span><span class="w"> </span><span class="nf">index</span><span class="p" data-group-id="8725908237-1">(</span><span class="n">conn</span><span class="p">,</span><span class="w"> </span><span class="c">_params</span><span class="p" data-group-id="8725908237-1">)</span><span class="w"> </span><span class="k" data-group-id="8725908237-2">do</span><span class="w">
</span><span class="n">page_view</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="nc">Counter</span><span class="o">.</span><span class="n">count_page_view</span><span class="p" data-group-id="8725908237-3">(</span><span class="s">"home"</span><span class="p" data-group-id="8725908237-3">)</span><span class="w">
</span><span class="n">render</span><span class="p" data-group-id="8725908237-4">(</span><span class="n">conn</span><span class="p">,</span><span class="w"> </span><span class="s">"index.html"</span><span class="p">,</span><span class="w"> </span><span class="ss">page_count</span><span class="p">:</span><span class="w"> </span><span class="n">page_view</span><span class="o">.</span><span class="n">count</span><span class="p" data-group-id="8725908237-4">)</span><span class="w">
</span><span class="k" data-group-id="8725908237-2">end</span></code></pre> <p> Display the counter on the page:</p> <pre><code class="makeup eex"><span class=""><!-- lib/distributed_sqlite_web/templates/page/index.html.heex --> <h1> Page Views </span><span class="p" data-group-id="1708810971-1"><%=</span><span class="w"> </span><span class="na">@page_count</span><span class="w"> </span><span class="p" data-group-id="1708810971-1">%></span><span class=""> </h1></span></code></pre> <p> Now deploy again using <code class="inline">flyctl deploy</code> and then browse to your site to verify that the count shows and updates when refreshing.</p> <h2> Restoring the DB on deploy</h2> <p> The next problem to deal with is that the database will be wiped on our next deploy since it’s using ephemeral storage.</p> <p> One way to resolve this is to use a persistent volume (which should be done for production apps). But since this post is all about Litestream, let’s set that up and see how it can help with this.</p> <p> The first step is to create a bucket in some S3-compatible storage. I like to use Digital Ocean Spaces for this, but you can also use AWS if you want.</p> <p> <a href="https://litestream.io/guides/">See Litestream docs for more options</a></p> <p> <a href="https://www.digitalocean.com/products/spaces">Set up a Digital Ocean space</a></p> <p> When you have your storage set up, you’ll need three pieces of information:</p> <ol> <li>
the bucket URL </li>
<li>
the access key </li>
<li>
the secret key </li> </ol> <p> A configuration file, <code class="inline">litestream.yml</code>, is needed for Litestream to function. Create one in the root of the project with the following text (remember to replace the path with YOUR path):</p> <pre><code class="yaml">access-key-id: ${LITESTREAM_ACCESS_KEY_ID}
secret-access-key: ${LITESTREAM_SECRET_ACCESS_KEY}
dbs:
- path: /app/distributed_sqlite.db
replicas:
- url: ${REPLICA_URL}</code></pre> <p> Now set the three variables in Fly to the values you recorded from when setting up the bucket.</p>
<pre><code>$ flyctl secrets set REPLICA_URL=... LITESTREAM_ACCESS_KEY_ID=... LITESTREAM_SECRET_ACCESS_KEY=...</code></pre> <p> Next, add Litestream to our Docker image. Add the following lines to <code class="inline">Dockerfile</code> as part of the <code class="inline">builder</code> phase:</p> <pre><code>ADD https://github.com/benbjohnson/litestream/releases/download/v0.3.9/litestream-v0.3.9-linux-amd64-static.tar.gz /tmp/litestream.tar.gz
RUN tar -C /usr/local/bin -xzf /tmp/litestream.tar.gz</code></pre> <p> And in the <code class="inline">runner</code> phase add:</p> <pre><code>COPY --from=builder /usr/local/bin/litestream /usr/local/bin/litestream
COPY litestream.yml /etc/litestream.yml</code></pre> <p> Update the starting script so that the Elixir release runs as a sub-process of Litestream. The easiest way I’ve found to do this is to create a run script called <code class="inline">run.sh</code> with the following content:</p> <pre><code>#!/bin/bash
set -e
# Restore the database if it does not already exist.
if [ -f /app/distributed_sqlite.db ]; then
echo "Database already exists, skipping restore"
else
echo "No database found, restoring from replica if exists"
litestream restore -v -if-replica-exists -o /app/distributed_sqlite.db "${REPLICA_URL}"
fi
# Run migrations
/app/bin/migrate
# Run litestream with your app as the subprocess.
exec litestream replicate -exec "/app/bin/server"</code></pre> <blockquote> <p>
Be sure to remove the migration script from fly.toml since it runs in the run.sh script now. </p> </blockquote> <p> Now update the <code class="inline">Dockerfile</code> to use this new script to start the app:</p> <pre><code>COPY run.sh /scripts/run.sh
RUN chmod 755 /scripts/run.sh
CMD ["/scripts/run.sh"]</code></pre> <p> Deploying should now start using Litestream to restore the database on deploys and push backups when data changes. You can verify this in Fly’s monitoring interface. Look for something similar to the image below:</p> <p> <a href="/images/fly-logs.png">Fly logs showing that Litestream is running</a></p> <h2> Distributing</h2> <p> With all of this in place, things work great when running one instance of your app. But as soon as you add another node, things get out of whack. Let’s see this in action. Scale the app to 2 instances and see what happens to the data.</p>
<pre><code>$ flyctl scale count 2</code></pre> <p> Refresh the browser enough times, or open the site in different windows/tabs, and you’ll start to see discrepancies in the view count. This is because we are not replicating the data between the instances. This is a problem Elixir is built for.</p> <h3> Set up the Cluster</h3> <p> <a href="https://fly.io/docs/elixir/getting-started/clustering/">Follow the Fly guide to get clustering working</a></p> <p> Once clustered, we can begin to replicate our database calls. Add a new GenServer with the following content:</p> <pre><code class="makeup elixir"><span class="c1"># /lib/distributed_sqlite/repo_replication.ex</span><span class="w"> </span><span class="kd">defmodule</span><span class="w"> </span><span class="nc">DistributedSqlite.RepoReplication</span><span class="w"> </span><span class="k" data-group-id="9869969934-1">do</span><span class="w">
</span><span class="na">@moduledoc</span><span class="w"> </span><span class="s">"""
Run on each node to handle replicating Repo writes
"""</span><span class="w">
</span><span class="kn">use</span><span class="w"> </span><span class="nc">GenServer</span><span class="w">
</span><span class="kn">alias</span><span class="w"> </span><span class="nc">DistributedSqlite.Repo</span><span class="w">
</span><span class="kd">def</span><span class="w"> </span><span class="nf">start_link</span><span class="p" data-group-id="9869969934-2">(</span><span class="n">args</span><span class="p" data-group-id="9869969934-2">)</span><span class="w"> </span><span class="k" data-group-id="9869969934-3">do</span><span class="w">
</span><span class="nc">GenServer</span><span class="o">.</span><span class="n">start_link</span><span class="p" data-group-id="9869969934-4">(</span><span class="bp">__MODULE__</span><span class="p">,</span><span class="w"> </span><span class="n">args</span><span class="p">,</span><span class="w"> </span><span class="ss">name</span><span class="p">:</span><span class="w"> </span><span class="bp">__MODULE__</span><span class="p" data-group-id="9869969934-4">)</span><span class="w">
</span><span class="k" data-group-id="9869969934-3">end</span><span class="w">
</span><span class="na">@impl</span><span class="w"> </span><span class="no">true</span><span class="w">
</span><span class="kd">def</span><span class="w"> </span><span class="nf">init</span><span class="p" data-group-id="9869969934-5">(</span><span class="c">_args</span><span class="p" data-group-id="9869969934-5">)</span><span class="w"> </span><span class="k" data-group-id="9869969934-6">do</span><span class="w">
</span><span class="p" data-group-id="9869969934-7">{</span><span class="ss">:ok</span><span class="p">,</span><span class="w"> </span><span class="p" data-group-id="9869969934-8">[</span><span class="p" data-group-id="9869969934-8">]</span><span class="p" data-group-id="9869969934-7">}</span><span class="w">
</span><span class="k" data-group-id="9869969934-6">end</span><span class="w">
</span><span class="kd">def</span><span class="w"> </span><span class="nf">handle_cast</span><span class="p" data-group-id="9869969934-9">(</span><span class="p" data-group-id="9869969934-10">{</span><span class="ss">:replicate</span><span class="p">,</span><span class="w"> </span><span class="n">query</span><span class="p">,</span><span class="w"> </span><span class="ss">:insert</span><span class="p" data-group-id="9869969934-10">}</span><span class="p">,</span><span class="w"> </span><span class="n">state</span><span class="p" data-group-id="9869969934-9">)</span><span class="w"> </span><span class="k" data-group-id="9869969934-11">do</span><span class="w">
</span><span class="nc">Repo</span><span class="o">.</span><span class="n">insert!</span><span class="p" data-group-id="9869969934-12">(</span><span class="n">query</span><span class="p" data-group-id="9869969934-12">)</span><span class="w">
</span><span class="p" data-group-id="9869969934-13">{</span><span class="ss">:noreply</span><span class="p">,</span><span class="w"> </span><span class="n">state</span><span class="p" data-group-id="9869969934-13">}</span><span class="w">
</span><span class="k" data-group-id="9869969934-11">end</span><span class="w">
</span><span class="kd">def</span><span class="w"> </span><span class="nf">handle_cast</span><span class="p" data-group-id="9869969934-14">(</span><span class="p" data-group-id="9869969934-15">{</span><span class="ss">:replicate</span><span class="p">,</span><span class="w"> </span><span class="n">changeset</span><span class="p">,</span><span class="w"> </span><span class="ss">:update</span><span class="p" data-group-id="9869969934-15">}</span><span class="p">,</span><span class="w"> </span><span class="n">state</span><span class="p" data-group-id="9869969934-14">)</span><span class="w"> </span><span class="k" data-group-id="9869969934-16">do</span><span class="w">
</span><span class="nc">Repo</span><span class="o">.</span><span class="n">update!</span><span class="p" data-group-id="9869969934-17">(</span><span class="n">changeset</span><span class="p" data-group-id="9869969934-17">)</span><span class="w">
</span><span class="p" data-group-id="9869969934-18">{</span><span class="ss">:noreply</span><span class="p">,</span><span class="w"> </span><span class="n">state</span><span class="p" data-group-id="9869969934-18">}</span><span class="w">
</span><span class="k" data-group-id="9869969934-16">end</span><span class="w">
</span><span class="k" data-group-id="9869969934-1">end</span></code></pre> <p> Make sure to start it in the <code class="inline">application.ex</code> </p> <pre><code class="makeup elixir"><span class="c1"># lib/distributed_sqlite/application.ex</span><span class="w"> </span><span class="n">children</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="p" data-group-id="7966870491-1">[</span><span class="w">
</span><span class="n">...</span><span class="p">,</span><span class="w">
</span><span class="p" data-group-id="7966870491-2">{</span><span class="nc">DistributedSqlite.RepoReplication</span><span class="p">,</span><span class="w"> </span><span class="p" data-group-id="7966870491-3">[</span><span class="p" data-group-id="7966870491-3">]</span><span class="p" data-group-id="7966870491-2">}</span><span class="w">
</span><span class="p" data-group-id="7966870491-1">]</span></code></pre> <p> Open the <code class="inline">DistributedSqlite.Repo</code> module and add a <code class="inline">replicate/2</code> function</p> <pre><code class="makeup elixir"><span class="na">@doc</span><span class="w"> </span><span class="s">""" Replicate the query on the the other nodes in the cluster
"""</span><span class="w">
</span><span class="kd">def</span><span class="w"> </span><span class="nf">replicate</span><span class="p" data-group-id="7884079325-1">(</span><span class="p" data-group-id="7884079325-2">{</span><span class="ss">:ok</span><span class="p">,</span><span class="w"> </span><span class="n">data_to_replicate</span><span class="p" data-group-id="7884079325-2">}</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="n">ret</span><span class="p">,</span><span class="w"> </span><span class="n">operation</span><span class="p" data-group-id="7884079325-1">)</span><span class="w"> </span><span class="ow">when</span><span class="w"> </span><span class="n">operation</span><span class="w"> </span><span class="ow">in</span><span class="w"> </span><span class="p" data-group-id="7884079325-3">[</span><span class="ss">:insert</span><span class="p">,</span><span class="w"> </span><span class="ss">:update</span><span class="p" data-group-id="7884079325-3">]</span><span class="w"> </span><span class="k" data-group-id="7884079325-4">do</span><span class="w">
</span><span class="bp">_</span><span class="w"> </span><span class="o">=</span><span class="w">
</span><span class="k">for</span><span class="w"> </span><span class="n">node</span><span class="w"> </span><span class="o"><-</span><span class="w"> </span><span class="nc">Node</span><span class="o">.</span><span class="n">list</span><span class="p" data-group-id="7884079325-5">(</span><span class="p" data-group-id="7884079325-5">)</span><span class="w"> </span><span class="k" data-group-id="7884079325-6">do</span><span class="w">
</span><span class="nc">GenServer</span><span class="o">.</span><span class="n">cast</span><span class="p" data-group-id="7884079325-7">(</span><span class="w">
</span><span class="p" data-group-id="7884079325-8">{</span><span class="nc">DistributedSqlite.RepoReplication</span><span class="p">,</span><span class="w"> </span><span class="n">node</span><span class="p" data-group-id="7884079325-8">}</span><span class="p">,</span><span class="w">
</span><span class="p" data-group-id="7884079325-9">{</span><span class="ss">:replicate</span><span class="p">,</span><span class="w"> </span><span class="n">data_to_replicate</span><span class="p">,</span><span class="w"> </span><span class="n">operation</span><span class="p" data-group-id="7884079325-9">}</span><span class="w">
</span><span class="p" data-group-id="7884079325-7">)</span><span class="w">
</span><span class="k" data-group-id="7884079325-6">end</span><span class="w">
</span><span class="n">ret</span><span class="w">
</span><span class="k" data-group-id="7884079325-4">end</span><span class="w">
</span><span class="kd">def</span><span class="w"> </span><span class="nf">replicate</span><span class="p" data-group-id="7884079325-10">(</span><span class="p" data-group-id="7884079325-11">{</span><span class="ss">:error</span><span class="p">,</span><span class="w"> </span><span class="c">_changeset</span><span class="p" data-group-id="7884079325-11">}</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="n">ret</span><span class="p">,</span><span class="w"> </span><span class="bp">_</span><span class="p" data-group-id="7884079325-10">)</span><span class="p">,</span><span class="w"> </span><span class="ss">do</span><span class="p">:</span><span class="w"> </span><span class="n">ret</span><span class="w">
</span><span class="kd">def</span><span class="w"> </span><span class="nf">replicate</span><span class="p" data-group-id="7884079325-12">(</span><span class="p" data-group-id="7884079325-13">%</span><span class="nc" data-group-id="7884079325-13">Ecto.Changeset</span><span class="p" data-group-id="7884079325-13">{</span><span class="p" data-group-id="7884079325-13">}</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="n">changeset</span><span class="p">,</span><span class="w"> </span><span class="n">operation</span><span class="p" data-group-id="7884079325-12">)</span><span class="w"> </span><span class="ow">when</span><span class="w"> </span><span class="n">operation</span><span class="w"> </span><span class="ow">in</span><span class="w"> </span><span class="p" data-group-id="7884079325-14">[</span><span class="ss">:insert</span><span class="p">,</span><span class="w"> </span><span class="ss">:update</span><span class="p" data-group-id="7884079325-14">]</span><span class="w"> </span><span class="k" data-group-id="7884079325-15">do</span><span class="w">
</span><span class="bp">_</span><span class="w"> </span><span class="o">=</span><span class="w">
</span><span class="k">for</span><span class="w"> </span><span class="n">node</span><span class="w"> </span><span class="o"><-</span><span class="w"> </span><span class="nc">Node</span><span class="o">.</span><span class="n">list</span><span class="p" data-group-id="7884079325-16">(</span><span class="p" data-group-id="7884079325-16">)</span><span class="w"> </span><span class="k" data-group-id="7884079325-17">do</span><span class="w">
</span><span class="nc">GenServer</span><span class="o">.</span><span class="n">cast</span><span class="p" data-group-id="7884079325-18">(</span><span class="w">
</span><span class="p" data-group-id="7884079325-19">{</span><span class="nc">DistributedSqlite.RepoReplication</span><span class="p">,</span><span class="w"> </span><span class="n">node</span><span class="p" data-group-id="7884079325-19">}</span><span class="p">,</span><span class="w">
</span><span class="p" data-group-id="7884079325-20">{</span><span class="ss">:replicate</span><span class="p">,</span><span class="w"> </span><span class="n">changeset</span><span class="p">,</span><span class="w"> </span><span class="n">operation</span><span class="p" data-group-id="7884079325-20">}</span><span class="w">
</span><span class="p" data-group-id="7884079325-18">)</span><span class="w">
</span><span class="k" data-group-id="7884079325-17">end</span><span class="w">
</span><span class="p" data-group-id="7884079325-21">{</span><span class="ss">:ok</span><span class="p">,</span><span class="w"> </span><span class="n">changeset</span><span class="p" data-group-id="7884079325-21">}</span><span class="w">
</span><span class="k" data-group-id="7884079325-15">end</span><span class="w">
</span><span class="kd">def</span><span class="w"> </span><span class="nf">replicate</span><span class="p" data-group-id="7884079325-22">(</span><span class="n">schema</span><span class="p">,</span><span class="w"> </span><span class="ss">:insert</span><span class="p" data-group-id="7884079325-22">)</span><span class="w"> </span><span class="k" data-group-id="7884079325-23">do</span><span class="w">
</span><span class="bp">_</span><span class="w"> </span><span class="o">=</span><span class="w">
</span><span class="k">for</span><span class="w"> </span><span class="n">node</span><span class="w"> </span><span class="o"><-</span><span class="w"> </span><span class="nc">Node</span><span class="o">.</span><span class="n">list</span><span class="p" data-group-id="7884079325-24">(</span><span class="p" data-group-id="7884079325-24">)</span><span class="w"> </span><span class="k" data-group-id="7884079325-25">do</span><span class="w">
</span><span class="nc">GenServer</span><span class="o">.</span><span class="n">cast</span><span class="p" data-group-id="7884079325-26">(</span><span class="w">
</span><span class="p" data-group-id="7884079325-27">{</span><span class="nc">DistributedSqlite.RepoReplication</span><span class="p">,</span><span class="w"> </span><span class="n">node</span><span class="p" data-group-id="7884079325-27">}</span><span class="p">,</span><span class="w">
</span><span class="p" data-group-id="7884079325-28">{</span><span class="ss">:replicate</span><span class="p">,</span><span class="w"> </span><span class="n">schema</span><span class="p">,</span><span class="w"> </span><span class="ss">:insert</span><span class="p" data-group-id="7884079325-28">}</span><span class="w">
</span><span class="p" data-group-id="7884079325-26">)</span><span class="w">
</span><span class="k" data-group-id="7884079325-25">end</span><span class="w">
</span><span class="p" data-group-id="7884079325-29">{</span><span class="ss">:ok</span><span class="p">,</span><span class="w"> </span><span class="n">schema</span><span class="p" data-group-id="7884079325-29">}</span><span class="w">
</span><span class="k" data-group-id="7884079325-23">end</span></code></pre> <p> This function can be piped into from an <code class="inline">Repo.insert</code> or the result of a <code class="inline">Repo.update</code>. Try this in the <code class="inline">DistributedSqlite.Counter</code> module:</p> <pre><code class="makeup elixir"><span class="k">case</span><span class="w"> </span><span class="n">page_count</span><span class="w"> </span><span class="k" data-group-id="6416662931-1">do</span><span class="w"> </span><span class="no">nil</span><span class="w"> </span><span class="o">-></span><span class="w">
</span><span class="p" data-group-id="6416662931-2">%</span><span class="nc" data-group-id="6416662931-2">PageCount</span><span class="p" data-group-id="6416662931-2">{</span><span class="p" data-group-id="6416662931-2">}</span><span class="w">
</span><span class="o">|></span><span class="w"> </span><span class="nc">PageCount</span><span class="o">.</span><span class="n">changeset</span><span class="p" data-group-id="6416662931-3">(</span><span class="p" data-group-id="6416662931-4">%{</span><span class="ss">count</span><span class="p">:</span><span class="w"> </span><span class="mi">1</span><span class="p">,</span><span class="w"> </span><span class="ss">page</span><span class="p">:</span><span class="w"> </span><span class="n">page_name</span><span class="p" data-group-id="6416662931-4">}</span><span class="p" data-group-id="6416662931-3">)</span><span class="w">
</span><span class="o">|></span><span class="w"> </span><span class="nc">Repo</span><span class="o">.</span><span class="n">insert</span><span class="p" data-group-id="6416662931-5">(</span><span class="p" data-group-id="6416662931-5">)</span><span class="w">
</span><span class="o">|></span><span class="w"> </span><span class="nc">Repo</span><span class="o">.</span><span class="n">replicate</span><span class="p" data-group-id="6416662931-6">(</span><span class="ss">:insert</span><span class="p" data-group-id="6416662931-6">)</span><span class="w">
</span><span class="p" data-group-id="6416662931-7">%</span><span class="nc" data-group-id="6416662931-7">PageCount</span><span class="p" data-group-id="6416662931-7">{</span><span class="p" data-group-id="6416662931-7">}</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="n">page_count</span><span class="w"> </span><span class="o">-></span><span class="w">
</span><span class="n">page_count</span><span class="w">
</span><span class="o">|></span><span class="w"> </span><span class="nc">PageCount</span><span class="o">.</span><span class="n">changeset</span><span class="p" data-group-id="6416662931-8">(</span><span class="p" data-group-id="6416662931-9">%{</span><span class="ss">count</span><span class="p">:</span><span class="w"> </span><span class="n">page_count</span><span class="o">.</span><span class="n">count</span><span class="w"> </span><span class="o">+</span><span class="w"> </span><span class="mi">1</span><span class="p" data-group-id="6416662931-9">}</span><span class="p" data-group-id="6416662931-8">)</span><span class="w">
</span><span class="o">|></span><span class="w"> </span><span class="nc">Repo</span><span class="o">.</span><span class="n">update</span><span class="p" data-group-id="6416662931-10">(</span><span class="p" data-group-id="6416662931-10">)</span><span class="w">
</span><span class="o">|></span><span class="w"> </span><span class="k">case</span><span class="w"> </span><span class="k" data-group-id="6416662931-11">do</span><span class="w">
</span><span class="p" data-group-id="6416662931-12">{</span><span class="ss">:ok</span><span class="p">,</span><span class="w"> </span><span class="n">cnt</span><span class="p" data-group-id="6416662931-12">}</span><span class="w"> </span><span class="o">-></span><span class="w">
</span><span class="n">cnt</span><span class="w">
</span><span class="o">|></span><span class="w"> </span><span class="nc">PageCount</span><span class="o">.</span><span class="n">replicate_changeset</span><span class="p" data-group-id="6416662931-13">(</span><span class="p" data-group-id="6416662931-13">)</span><span class="w">
</span><span class="o">|></span><span class="w"> </span><span class="nc">Repo</span><span class="o">.</span><span class="n">replicate</span><span class="p" data-group-id="6416662931-14">(</span><span class="ss">:update</span><span class="p" data-group-id="6416662931-14">)</span><span class="w">
</span><span class="p" data-group-id="6416662931-15">{</span><span class="ss">:ok</span><span class="p">,</span><span class="w"> </span><span class="n">cnt</span><span class="p" data-group-id="6416662931-15">}</span><span class="w">
</span><span class="k" data-group-id="6416662931-11">end</span><span class="w">
</span><span class="k" data-group-id="6416662931-1">end</span></code></pre> <p> With this in place, deploy again. Your data is now consistent no matter which node serves the traffic.</p> <h2> Wrap up</h2> <p> I’m not sure how far this can be pushed and there are downsides to this approach, but I plan on continuing this journey that puts my data as close to the app as possible.</p> <p> There is another worth exploring that removes the need to replicate the data on the application side. It is still in beta, but I plan on trying it out soon as well.</p> <p> <a href="https://fly.io/docs/litefs/">Another approach - Litefs</a></p> <p> <a href="https://fed.brid.gy/"></a></p></p>
Matt Silbernagelhttps://silbernagel.dev/posts/distributed-sqlite-with-elixirDistributed SQLite with Elixir2023-01-08T00:50:47.077707Z<h2>
Update 10-15-2023</h2> <blockquote> <p>
This is no longer an issue since Phoenix started bundling Tailwind. </p> </blockquote> <h2> The problem</h2> <p> I was having issues with Tailwind building correctly, but only in production and only when built in a Docker container.</p> <p> I followed an article I found online to get PostCSS and Tailwind configured correctly so that only the classes required by the application are built (we don’t want a huge CSS file).</p> <p> <a href="https://s2g.io/using-tailwindcss-with-phoenix">Using Tailwind with Phoenix</a></p> <p> My tailwind.config.js looked like this:</p> <pre><code class="javascript">module.exports = {
  purge: [
'../lib/**/*.ex',
'../lib/**/*.leex',
'../lib/**/*.eex',
'./js/**/*.js'
],
enabled: process.env.NODE_ENV === 'production',
mode: 'jit',
darkMode: false, // or 'media' or 'class'
theme: {
extend: {
container: (theme) => ({
center: true,
padding: theme("spacing.4"),
screens: {
sm: "100%",
md: "100%",
lg: "1024px",
xl: "1280px"
}
})
},
},
variants: {
extend: {},
},
plugins: [
require('@tailwindcss/typography'),
],
}</code></pre> <p> Here we are telling Tailwind to search through our <code class="inline">.ex</code>, <code class="inline">.leex</code>, <code class="inline">.eex</code> and <code class="inline">.js</code> files for relevant classes and purge anything not used.</p> <p> This all worked perfectly in development and local testing.</p> <p> I added a Dockerfile to use as my production deployment strategy. I started with the Dockerfile documented by Phoenix and tweaked it a little to use the latest version of Elixir. Here is what I ended up with, which was not compiling Tailwind correctly:</p> <p> <a href="https://hexdocs.pm/phoenix/releases.html#containers">Phoenix documented Dockerfile</a></p> <pre><code class="Dockerfile">FROM elixir:1.12.1-alpine AS build
RUN apk add --no-cache build-base npm git python3
WORKDIR /app
RUN mix local.hex --force && \
mix local.rebar --force
ENV MIX_ENV=prod
COPY mix.exs mix.lock ./
COPY config config
RUN mix do deps.get, deps.compile
# build assets
COPY assets/package.json assets/package-lock.json ./assets/
RUN npm --prefix ./assets ci --progress=false --no-audit --loglevel=error
COPY priv priv
COPY assets assets
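# NOTE (added for clarity): at this point lib/ (our templates) has not been
# copied into the image yet, so the purge step run by "npm run deploy" below
# finds no template files. This is the order-of-operations bug explained in
# "The solution" section.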
RUN npm run --prefix ./assets deploy
RUN mix phx.digest
COPY lib lib
# uncomment COPY if rel/ exists
# COPY rel rel
RUN mix do compile, release
# prepare release image
FROM alpine:3.13 AS app
RUN apk add --no-cache openssl ncurses-libs libstdc++
WORKDIR /app
RUN chown nobody:nobody /app
USER nobody:nobody
COPY --from=build --chown=nobody:nobody /app/_build/prod/rel/my_app ./
ENV HOME=/app
CMD trap 'exit' INT; ./bin/my_app eval "MyApp.Release.migrate()" && ./bin/my_app start</code></pre> <h2> The solution</h2> <p> I tried many different things to fix the issue, such as installing a specific version of Node in the image and building a development build with webpack, but what it came down to was an order-of-operations bug.</p> <p> Remember the tailwind config from above? It says to purge any tailwind classes that haven’t been used in our template files. Well, our image doesn’t have any template files at the time of running <code class="inline">npm run --prefix ./assets deploy</code> because we haven’t copied our <code class="inline">lib</code> folder over to the image yet!</p> <p> So the simple fix was to move <code class="inline">COPY lib lib</code> up in the Dockerfile, before running <code class="inline">npm deploy</code>.</p></h2>
Matt Silbernagelhttps://silbernagel.dev/posts/phoenix-tailwind-dockerPhoenix + Tailwind + Docker issues2021-06-21T00:50:47.077732Z<ul>
<li>
<a href="/posts/deploying-elixir-on-ecs-part-1">Part 1 - using Terraform to describe and build the infrastructure</a> </li>
<li>
<a href="/posts/deploying-elixir-on-ecs-part-2">Part 2 - building and deploying a docker image to ECS</a> </li></ul>
<p>
In Parts 1 and 2, we built the infrastructure and deployed a very simple Phoenix application. Since it’s behind a load balancer, we can scale up our application by increasing the number of Tasks in our ECS service. For a lot of use cases, that works just fine. But if we are using Phoenix Presence or anything that requires coordination between the Elixir nodes, we’ll need to build a cluster.</p> <p> In order to do this, we’ll need to do several things:</p> <ul> <li>
Update our ECS service to include Service Discovery </li>
<li>
Include libcluster to automatically connect nodes </li>
<li>
Update our Release to include a pre-start script that names our node </li>
<li>
Make sure all nodes have the same COOKIE </li> </ul> <h2> Update the ECS Service</h2> <p> ECS includes Service Discovery that we can set up via Terraform. Add this to our previous Terraform file:</p> <p> <a href="https://docs.aws.amazon.com/AmazonECS/latest/developerguide/service-discovery.html">ECS Service Discovery Documentation</a> </p> <pre><code class="hcl">resource "aws_service_discovery_private_dns_namespace" "dns_namespace" {
name = "${var.app_name}.local"
description = "some desc"
vpc = aws_vpc.main.id
}
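# (Illustrative note, added: with var.app_name set to "ecs_app" — the variable
# is assumed to be declared elsewhere in the Terraform — this creates the
# private zone "ecs_app.local"; each task then registers an A record under it.)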
resource "aws_service_discovery_service" service_discovery {
name = var.app_name
dns_config {
namespace_id = aws_service_discovery_private_dns_namespace.dns_namespace.id
dns_records {
ttl = 10
type = "A"
}
routing_policy = "MULTIVALUE"
}
}
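# (Note, added: MULTIVALUE routing returns the registered healthy task IPs
# for a DNS query, which is what lets libcluster discover every node later.)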
</code></pre> <p> Reference the service discovery in the <code class="inline">service</code> resource:</p> <pre><code class="hcl">resource "aws_ecs_service" "service" {
name = "${var.app_name}_service"
cluster = aws_ecs_cluster.ecs_cluster.id
task_definition = "arn:aws:ecs:us-east-1:${data.aws_caller_identity.current.account_id}:task-definition/${aws_ecs_task_definition.task_definition.family}:${var.task_version}"
desired_count = 1
launch_type = "FARGATE"
network_configuration {
security_groups = [aws_security_group.security_group.id]
subnets = data.aws_subnet.default_subnet.*.id
assign_public_ip = true
}
load_balancer {
target_group_arn = aws_lb_target_group.lb_target_group.arn
container_name = var.app_name
container_port = "4000"
}
service_registries {
registry_arn = aws_service_discovery_service.service_discovery.arn
container_name = var.app_name
}
}</code></pre> <p> This will create a service registry and register our service’s IP address when it starts up. It does this with Route53, by creating a private DNS namespace that can be called anything you like. In the above definition, we called it <code class="inline">ecs_app.local</code> (<code class="inline">"${var.app_name}.local"</code>). When a new task starts up, it will be registered as an <code class="inline">A</code> record under that DNS namespace.</p> <p> Make sure to run <code class="inline">terraform plan</code> and <code class="inline">terraform apply</code>.</p> <blockquote> <p>
Adding service registries to an ECS service is a destructive action, so don’t be alarmed that it will destroy and then recreate your ECS service. </p> </blockquote> <h2> Auto connecting nodes with libcluster</h2> <p> Now that our nodes are registered, we need a way to connect them. To do this, we’ll use libcluster, a great small library that makes cluster auto-formation very easy. It comes with several different strategies out of the box, including Kubernetes, network gossip, an Erlang hosts file, and the one we’ll use, DNSPoll.</p> <p> <a href="https://github.com/bitwalker/libcluster">https://github.com/bitwalker/libcluster</a> </p> <p> Let’s first add libcluster as a dependency.</p> <pre><code class="makeup elixir"><span class="c1"># mix.exs</span><span class="w">
</span><span class="kd">defmodule</span><span class="w"> </span><span class="nc">EcsApp.MixProject</span><span class="w"> </span><span class="k" data-group-id="4595886194-1">do</span><span class="w">
</span><span class="kn">use</span><span class="w"> </span><span class="nc">Mix.Project</span><span class="w">
</span><span class="c1"># ...</span><span class="w">
</span><span class="kd">defp</span><span class="w"> </span><span class="nf">deps</span><span class="w"> </span><span class="k" data-group-id="4595886194-2">do</span><span class="w">
</span><span class="p" data-group-id="4595886194-3">{</span><span class="ss">:libcluster</span><span class="p">,</span><span class="w"> </span><span class="s">"~> 3.2"</span><span class="p" data-group-id="4595886194-3">}</span><span class="p">,</span><span class="w">
</span><span class="c1"># all your other deps</span><span class="w">
</span><span class="k" data-group-id="4595886194-2">end</span><span class="w">
</span><span class="k" data-group-id="4595886194-1">end</span></code></pre> <p> run <code class="inline">mix deps.get</code></p> <p> Now we need to configure libcluster. I like to do this in the <code class="inline">application.ex</code> file.</p> <pre><code class="makeup elixir"><span class="c1"># lib/ecs_app/application.ex</span><span class="w"> </span><span class="kd">defmodule</span><span class="w"> </span><span class="nc">EcsApp.Application</span><span class="w"> </span><span class="k" data-group-id="7633864806-1">do</span><span class="w">
</span><span class="kn">use</span><span class="w"> </span><span class="nc">Application</span><span class="w">
</span><span class="kd">def</span><span class="w"> </span><span class="nf">start</span><span class="p" data-group-id="7633864806-2">(</span><span class="c">_type</span><span class="p">,</span><span class="w"> </span><span class="c">_args</span><span class="p" data-group-id="7633864806-2">)</span><span class="w"> </span><span class="k" data-group-id="7633864806-3">do</span><span class="w">
</span><span class="n">topologies</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="p" data-group-id="7633864806-4">[</span><span class="w">
</span><span class="ss">ecs_app</span><span class="p">:</span><span class="w"> </span><span class="p" data-group-id="7633864806-5">[</span><span class="w">
</span><span class="ss">strategy</span><span class="p">:</span><span class="w"> </span><span class="nc">Cluster.Strategy.DNSPoll</span><span class="p">,</span><span class="w">
</span><span class="ss">config</span><span class="p">:</span><span class="w"> </span><span class="p" data-group-id="7633864806-6">[</span><span class="w">
</span><span class="ss">polling_interval</span><span class="p">:</span><span class="w"> </span><span class="mi">1000</span><span class="p">,</span><span class="w">
</span><span class="ss">query</span><span class="p">:</span><span class="w"> </span><span class="s">"ecs_app.ecs_app.local"</span><span class="p">,</span><span class="w">
</span><span class="ss">node_basename</span><span class="p">:</span><span class="w"> </span><span class="s">"ecs_app"</span><span class="w">
</span><span class="p" data-group-id="7633864806-6">]</span><span class="w">
</span><span class="p" data-group-id="7633864806-5">]</span><span class="w">
</span><span class="p" data-group-id="7633864806-4">]</span><span class="w">
</span><span class="n">children</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="p" data-group-id="7633864806-7">[</span><span class="w">
</span><span class="p" data-group-id="7633864806-8">{</span><span class="nc">Cluster.Supervisor</span><span class="p">,</span><span class="w"> </span><span class="p" data-group-id="7633864806-9">[</span><span class="n">topologies</span><span class="p">,</span><span class="w"> </span><span class="p" data-group-id="7633864806-10">[</span><span class="ss">name</span><span class="p">:</span><span class="w"> </span><span class="nc">EcsApp.ClusterSupervisor</span><span class="p" data-group-id="7633864806-10">]</span><span class="p" data-group-id="7633864806-9">]</span><span class="p" data-group-id="7633864806-8">}</span><span class="p">,</span><span class="w">
</span><span class="nc">EcsAppWeb.Telemetry</span><span class="p">,</span><span class="w">
</span><span class="p" data-group-id="7633864806-11">{</span><span class="nc">Phoenix.PubSub</span><span class="p">,</span><span class="w"> </span><span class="ss">name</span><span class="p">:</span><span class="w"> </span><span class="nc">EcsApp.PubSub</span><span class="p" data-group-id="7633864806-11">}</span><span class="p">,</span><span class="w">
</span><span class="nc">EcsAppWeb.Endpoint</span><span class="w">
</span><span class="p" data-group-id="7633864806-7">]</span><span class="w">
</span><span class="n">opts</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="p" data-group-id="7633864806-12">[</span><span class="ss">strategy</span><span class="p">:</span><span class="w"> </span><span class="ss">:one_for_one</span><span class="p">,</span><span class="w"> </span><span class="ss">name</span><span class="p">:</span><span class="w"> </span><span class="nc">EcsApp.Supervisor</span><span class="p" data-group-id="7633864806-12">]</span><span class="w">
</span><span class="nc">Supervisor</span><span class="o">.</span><span class="n">start_link</span><span class="p" data-group-id="7633864806-13">(</span><span class="n">children</span><span class="p">,</span><span class="w"> </span><span class="n">opts</span><span class="p" data-group-id="7633864806-13">)</span><span class="w">
</span><span class="k" data-group-id="7633864806-3">end</span><span class="w">
</span><span class="k" data-group-id="7633864806-1">end</span></code></pre> <h2> Naming the Nodes</h2> <p> Libcluster assumes that your nodes are named a certain way - app @ ip address - for example <code class="inline">ecs_app@192.168.1.10</code>. In order to do this, we’ll use a release script to set the long name of our node.</p> <p> <a href="https://erlang.org/doc/reference_manual/distributed.html#nodes">Node long name reference</a></p> <p> Start by generating the default templates</p>
<pre><code class="bash">mix release.init</code></pre> <p> This will create a new folder at <code class="inline">rel/</code> with three new files. The one we care about is <code class="inline">env.sh.eex</code>. Make it look like the following:</p> <pre><code class="bash">#!/bin/sh
export PUBLIC_HOSTNAME=`curl ${ECS_CONTAINER_METADATA_URI}/task | jq -r ".Containers[0].Networks[0].IPv4Addresses[0]"`
export RELEASE_DISTRIBUTION=name
export RELEASE_NODE=<%= @release.name %>@${PUBLIC_HOSTNAME}</code></pre> <p> Here is what’s happening in this file.</p> <ul> <li>
Line 2 - Fetches the metadata for the current Task, parses it with <code class="inline">jq</code> to extract the IP address, and sets the variable <code class="inline">PUBLIC_HOSTNAME</code> to that IP address. </li>
<li>
Line 3 - This tells the Release to use the long name format </li>
<li>
Line 4 - Sets the long name of the node to <code class="inline">app_name@ip_address</code>, e.g. <code class="inline">ecs_app@192.168.1.10</code> </li> </ul> <p> <a href="https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task-metadata-endpoint-v3.html#task-metadata-endpoint-v3-response">Task Metadata Documentation</a></p> <p> This script runs as part of the release, but we still need to tell Docker to include it. We also need to install <code class="inline">jq</code> and <code class="inline">curl</code> in our container.</p> <pre><code class="dockerfile">FROM elixir:1.10.0-alpine AS build
ARG MIX_ENV
ARG SECRET_KEY_BASE
RUN apk add --no-cache build-base git npm python
WORKDIR /app
# install hex + rebar
RUN mix local.hex --force && \
mix local.rebar --force
ENV MIX_ENV=${MIX_ENV}
ENV SECRET_KEY_BASE=${SECRET_KEY_BASE}
RUN echo $SECRET_KEY_BASE
COPY mix.exs mix.lock ./
COPY config config
RUN mix do deps.get, deps.compile
COPY assets/package.json assets/package-lock.json ./assets/
RUN npm --prefix ./assets ci --progress=false --no-audit --loglevel=error
COPY priv priv
COPY assets assets
RUN npm run --prefix ./assets deploy
RUN mix phx.digest
COPY lib lib
COPY rel rel
RUN mix do compile, release
FROM alpine:3.9 AS app
ARG MIX_ENV
ARG SECRET_KEY_BASE
RUN apk add --no-cache openssl ncurses-libs curl jq
WORKDIR /app
RUN chown nobody:nobody /app
USER nobody:nobody
COPY --from=build --chown=nobody:nobody /app/_build/${MIX_ENV}/rel/ecs_app ./
ENV HOME=/app
CMD ["bin/ecs_app", "start"]</code></pre> <h2> Set the cookie</h2> <p> The last thing we need to do is make sure all the nodes share the same cookie. This is required for the nodes to connect.</p> <p> In the AWS ECS console, we can set environment variables, and the release will look for one called <code class="inline">RELEASE_COOKIE</code>. Let’s set that up.</p> <ul> <li>
Find the current Task Definition for your service and choose <code class="inline">Create a New Revision</code>. </li>
<li>
In the Container Definition settings, click your container name and find the Environment Variables section. <ul>
<li>
In the Key field, type <code class="inline">RELEASE_COOKIE</code>; in the Value field, enter the result of running <code class="inline">mix phx.gen.secret</code>. </li>
</ul>
</li>
<li>
Click Update, then scroll down and click Create </li>
<li>
In the Actions dropdown, choose Update Service </li>
<li>
Scroll down and click Skip to Review </li>
<li>
Scroll down and click Update Service </li> </ul> <p> Assuming everything goes well, tasks from your new Task Definition revision will start running.</p> <h2> Finalize</h2> <p> Finally, push up your latest changes and let them deploy. Once deployed, you should be able to increase the number of tasks running, and your nodes should all connect. This is easy to verify via logging or by turning on the LiveDashboard in production.</p> <p> <a href="https://github.com/silbermm/ecs_example">For an example application, see this GitHub repo</a></p></p>
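<p> As a quick sanity check of the node-naming scheme described above, here is a self-contained sketch that fakes the task metadata response and derives the node name the way <code class="inline">env.sh.eex</code> does. The JSON shape follows the Task Metadata docs, but the IP address is a made-up sample, and the sketch assumes <code class="inline">jq</code> is installed (just like the release script):</p>

```shell
#!/bin/sh
# Simulated response from ${ECS_CONTAINER_METADATA_URI}/task (sample data only;
# a real response contains many more fields).
METADATA='{"Containers":[{"Networks":[{"IPv4Addresses":["192.168.1.10"]}]}]}'

# Same jq filter used in rel/env.sh.eex
PUBLIC_HOSTNAME=$(printf '%s' "$METADATA" | jq -r '.Containers[0].Networks[0].IPv4Addresses[0]')

# Long-name mode, and the app_name@ip_address node name
export RELEASE_DISTRIBUTION=name
export RELEASE_NODE="ecs_app@${PUBLIC_HOSTNAME}"
echo "$RELEASE_NODE"
```

<p> Running this prints the node name libcluster expects, without needing a live ECS task.</p>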
Matt Silbernagel | https://silbernagel.dev/posts/deploying-elixir-on-ecs-part-3 | Deploying Elixir on ECS - Part 3 | 2020-11-02T00:50:47.077740Z<ul>
<li>
<a href="/posts/deploying-elixir-on-ecs-part-1">Part 1 - using Terraform to describe and build the infrastructure</a> </li>
<li>
<a href="/posts/deploying-elixir-on-ecs-part-3">Part 3 - using ECS Service Discovery to build a distributed Elixir cluster</a> </li></ul>
<p>
In Part 1 we used terraform to build all of the required ECS infrastructure in AWS. Next we’ll build an image, push it to the image repo, and tell ECS to run it.</p> <h2> A simple project</h2> <p> Start by generating a simple Phoenix app, or feel free to use an existing app that you want to deploy to ECS.</p>
<pre><code class="bash">$ mix phx.new ecs_app --no-ecto --live</code></pre> <p> Add a health controller that has a single endpoint that the ALB will use to determine the health of the service. Make a new file at <code class="inline">lib/ecs_app_web/controllers/health_controller.ex</code> and add the following content:</p> <pre><code class="makeup elixir"><span class="kd">defmodule</span><span class="w"> </span><span class="nc">EcsAppWeb.HealthController</span><span class="w"> </span><span class="k" data-group-id="8067871974-1">do</span><span class="w"> </span><span class="kn">use</span><span class="w"> </span><span class="nc">EcsAppWeb</span><span class="p">,</span><span class="w"> </span><span class="ss">:controller</span><span class="w">
</span><span class="kd">def</span><span class="w"> </span><span class="nf">index</span><span class="p" data-group-id="8067871974-2">(</span><span class="n">conn</span><span class="p">,</span><span class="w"> </span><span class="c">_params</span><span class="p" data-group-id="8067871974-2">)</span><span class="w"> </span><span class="k" data-group-id="8067871974-3">do</span><span class="w">
</span><span class="p" data-group-id="8067871974-4">{</span><span class="ss">:ok</span><span class="p">,</span><span class="w"> </span><span class="n">vsn</span><span class="p" data-group-id="8067871974-4">}</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="nc">:application</span><span class="o">.</span><span class="n">get_key</span><span class="p" data-group-id="8067871974-5">(</span><span class="ss">:ecs_app</span><span class="p">,</span><span class="w"> </span><span class="ss">:vsn</span><span class="p" data-group-id="8067871974-5">)</span><span class="w">
</span><span class="n">conn</span><span class="w">
</span><span class="o">|></span><span class="w"> </span><span class="n">put_status</span><span class="p" data-group-id="8067871974-6">(</span><span class="mi">200</span><span class="p" data-group-id="8067871974-6">)</span><span class="w">
</span><span class="o">|></span><span class="w"> </span><span class="n">json</span><span class="p" data-group-id="8067871974-7">(</span><span class="p" data-group-id="8067871974-8">%{</span><span class="ss">healthy</span><span class="p">:</span><span class="w"> </span><span class="no">true</span><span class="p">,</span><span class="w"> </span><span class="ss">version</span><span class="p">:</span><span class="w"> </span><span class="nc">List</span><span class="o">.</span><span class="n">to_string</span><span class="p" data-group-id="8067871974-9">(</span><span class="n">vsn</span><span class="p" data-group-id="8067871974-9">)</span><span class="p">,</span><span class="w"> </span><span class="ss">node_name</span><span class="p">:</span><span class="w"> </span><span class="n">node</span><span class="p" data-group-id="8067871974-10">(</span><span class="p" data-group-id="8067871974-10">)</span><span class="p" data-group-id="8067871974-8">}</span><span class="p" data-group-id="8067871974-7">)</span><span class="w">
</span><span class="k" data-group-id="8067871974-3">end</span><span class="w">
</span><span class="k" data-group-id="8067871974-1">end</span></code></pre> <p> and in <code class="inline">lib/ecs_app_web/router.ex</code></p> <pre><code class="makeup elixir"><span class="n">scope</span><span class="w"> </span><span class="s">"/"</span><span class="p">,</span><span class="w"> </span><span class="nc">EcsAppWeb</span><span class="w"> </span><span class="k" data-group-id="6362576783-1">do</span><span class="w"> </span><span class="n">get</span><span class="w"> </span><span class="s">"/health"</span><span class="p">,</span><span class="w"> </span><span class="nc">HealthController</span><span class="p">,</span><span class="w"> </span><span class="ss">:index</span><span class="w">
</span><span class="k" data-group-id="6362576783-1">end</span><span class="w">
</span></code></pre> <blockquote> <p>
This is a pattern I add to a lot of my web services so I can verify the version that’s deployed and the node name. </p> </blockquote> <h2> Configuration</h2> <p> There’s a few things we’ll need to update in the default phoenix configuration.</p> <p> First update the <code class="inline">prod.exs</code> by changing the host to your load balancer url. This was one of the terraform outputs when we built the infrastructure, or it can also be found in the AWS web console:</p> <pre><code class="makeup elixir"><span class="n">config</span><span class="w"> </span><span class="ss">:ecs_app</span><span class="p">,</span><span class="w"> </span><span class="nc">EcsAppWeb.Endpoint</span><span class="p">,</span><span class="w"> </span><span class="ss">url</span><span class="p">:</span><span class="w"> </span><span class="p" data-group-id="3495938348-1">[</span><span class="ss">host</span><span class="p">:</span><span class="w"> </span><span class="s">"your-lb.us-east-1.elb.amazonaws.com"</span><span class="p">,</span><span class="w"> </span><span class="ss">port</span><span class="p">:</span><span class="w"> </span><span class="mi">4000</span><span class="p" data-group-id="3495938348-1">]</span><span class="p">,</span><span class="w">
</span><span class="ss">cache_static_manifest</span><span class="p">:</span><span class="w"> </span><span class="s">"priv/static/cache_manifest.json"</span></code></pre> <p> This will ensure live view works correctly.</p> <p> Secondly, make sure you uncomment the following line in <code class="inline">config/prod.secret.exs</code></p>
<pre><code class="makeup elixir"><span class="n">config</span><span class="w"> </span><span class="ss">:ecs_app</span><span class="p">,</span><span class="w"> </span><span class="nc">EcsAppWeb.Endpoint</span><span class="p">,</span><span class="w"> </span><span class="ss">server</span><span class="p">:</span><span class="w"> </span><span class="no">true</span></code></pre> <p> This will ensure the endpoint starts up when running a release.</p> <h2> Dockerfile</h2> <p> The Dockerfile is rather simple and taken almost directly from the Phoenix docs.</p> <p> <a href="https://hexdocs.pm/phoenix/releases.html#content">Phoenix Dockerfile Documentation</a></p> <p> Create the file <code class="inline">Dockerfile</code> and add the following:</p> <pre><code class="docker">FROM elixir:1.10.0-alpine AS build
ARG MIX_ENV
ARG SECRET_KEY_BASE
RUN apk add --no-cache build-base git npm python
WORKDIR /app
# install hex + rebar
RUN mix local.hex --force && \
mix local.rebar --force
ENV MIX_ENV=${MIX_ENV}
ENV SECRET_KEY_BASE=${SECRET_KEY_BASE}
COPY mix.exs mix.lock ./
COPY config config
RUN mix do deps.get, deps.compile
COPY assets/package.json assets/package-lock.json ./assets/
RUN npm --prefix ./assets ci --progress=false --no-audit --loglevel=error
COPY priv priv
COPY assets assets
RUN npm run --prefix ./assets deploy
RUN mix phx.digest
COPY lib lib
RUN mix do compile, release
FROM alpine:3.9 AS app
ARG MIX_ENV
RUN apk add --no-cache openssl ncurses-libs
WORKDIR /app
RUN chown nobody:nobody /app
USER nobody:nobody
COPY --from=build --chown=nobody:nobody /app/_build/${MIX_ENV}/rel/ecs_app ./
ENV HOME=/app
CMD ["bin/ecs_app", "start"]</code></pre> <h2> Build Configuration</h2> <p> I like to create a <code class="inline">Makefile</code> for building my Docker images and pushing them to ECR. Note that <code class="inline">your_ecr_url</code> is the URL of the ECR repository that was created in Part 1.</p> <pre><code class="Makefile">APP_NAME ?= `grep 'app:' mix.exs | sed -e 's/\[//g' -e 's/ //g' -e 's/app://' -e 's/[:,]//g'`
APP_VSN ?= `grep 'version:' mix.exs | cut -d '"' -f2`
BUILD ?= `git rev-parse --short HEAD`
build_local:
	docker build --build-arg APP_VSN=$(APP_VSN) \
	  --build-arg MIX_ENV=prod \
	  --build-arg SECRET_KEY_BASE=$(SECRET_KEY_BASE) \
	  -t $(APP_NAME):$(APP_VSN) .
build:
	docker build --build-arg APP_VSN=$(APP_VSN) \
	  --build-arg MIX_ENV=prod \
	  --build-arg SECRET_KEY_BASE=$(SECRET_KEY_BASE) \
	  -t your_ecr_url:$(APP_VSN)-$(BUILD) \
	  -t your_ecr_url:latest .
push:
	eval `aws ecr get-login --no-include-email --region us-east-1`
	docker push your_ecr_url:$(APP_VSN)-$(BUILD)
	docker push your_ecr_url:latest
deploy:
	./bin/ecs-deploy -c your_cluster_name -n your_service_name -i your_ecr_url:$(APP_VSN)-$(BUILD) -r us-east-1 -t 300
</code></pre> <p> For this to work, you’ll need to set an environment variable <code class="inline">SECRET_KEY_BASE</code> which you can generate with <code class="inline">mix phx.gen.secret</code>.</p> <p> Assuming you have docker on your computer, you can now run <code class="inline">make build_local</code> and it should build and package a production release docker image. And it’s always a good idea to try it out locally before deploying:</p>
<pre><code class="bash">$ docker run -p 4000:4000 -it ecs_app:0.1.0</code></pre> <p> You should be able to hit <a href="http://localhost:4000">http://localhost:4000</a> now.</p> <p> The <code class="inline">push</code> task requires that you have the AWS CLI installed on your computer and your AWS access key and secret set up correctly. See the docs to set it up locally.</p> <p> <a href="https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-files.html">AWS CLI Userguide</a></p> <p> For the <code class="inline">deploy</code> step, I reference a script at <code class="inline">./bin/ecs-deploy</code>. You can get this script on GitHub. Create a folder at the root of your project called <code class="inline">bin</code> and place the <code class="inline">ecs-deploy</code> script in it. This requires the same AWS authentication as the <code class="inline">push</code> task. It also requires that you have <code class="inline">jq</code> installed on your system, and you may need to set the execution bit on the file: <code class="inline">chmod +x ./bin/ecs-deploy</code>.</p> <p> <a href="https://github.com/silinternational/ecs-deploy">silinternational/ecs-deploy</a></p> <h2> Deploy!</h2> <p> Now that we have a simple project, let’s get it deployed to ECS. Assuming your AWS credentials are set up correctly, you should be able to run the following commands in order:</p> <ol> <li>
<code class="inline">make build</code> - builds and tags a docker image </li>
<li>
<code class="inline">make push</code> - pushes that image to your private Docker repository </li>
<li>
<code class="inline">make deploy</code> - instructs ECS to create a new Task Definition revision with your latest image and start running it </li> </ol> <p> The deploy task can take some time. It tries to verify that the new task is running and that the previous task has stopped. You can browse to the ECS web console and watch the progress of your task starting.</p> <p> If everything worked correctly, you should be able to browse to the Load Balancer URL and see the default Phoenix welcome screen!</p> <h2> GitHub Actions</h2> <p> It’s great that we can build and deploy the app locally; now let’s automate the deployment process with GitHub Actions.</p> <p> We’re going to create one workflow that does three jobs:</p> <ol> <li>
Run Tests </li>
<li>
Build and push the docker image </li>
<li>
Deploy to ECS </li> </ol> <p> Steps 1 and 2 will run in parallel and step 3 will run only if 1 and 2 are both successful.</p> <p> Create a new file at <code class="inline">.github/workflows/ci.yml</code> with the following content:</p> <pre><code class="yaml">name: ECS DEPLOYMENT
on:
  push:
    branches: [ main ] # I renamed my master branch to main
jobs:
  test:
    name: Run Tests
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
        with:
          ref: main
      - uses: actions/cache@v2
        with:
          path: deps
          key: ${{ runner.os }}-mix-${{ hashFiles(format('{0}{1}', github.workspace, '/mix.lock')) }}
          restore-keys: |
            ${{ runner.os }}-mix-
      - name: Set up Elixir
        uses: actions/setup-elixir@v1
        with:
          elixir-version: '1.10.3'
          otp-version: '22.3'
      - name: Install dependencies
        run: mix deps.get
      - name: Run tests
        run: MIX_ENV=test mix do compile, test
  build:
    name: Build And Push Container
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
        with:
          ref: main
      - uses: actions/cache@v2
        with:
          path: deps
          key: ${{ runner.os }}-mix-${{ hashFiles(format('{0}{1}', github.workspace, '/mix.lock')) }}
          restore-keys: |
            ${{ runner.os }}-mix-
      - name: Configure AWS Credentials
        uses: aws-actions/configure-aws-credentials@v1
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: us-east-1
      - name: Build Docker Image
        run: make build
        env:
          SECRET_KEY_BASE: ${{ secrets.SECRET_KEY_BASE }}
      - name: Push Docker Image
        run: make push
  deploy:
    name: Deploy
    runs-on: ubuntu-latest
    needs: [test, build]
    steps:
      - uses: actions/checkout@v2
        with:
          ref: main
      - name: Configure AWS Credentials
        uses: aws-actions/configure-aws-credentials@v1
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: us-east-1
      - name: Deploy
        run: make deploy</code></pre> <p> You’ll notice that there are references to three different <code class="inline">${{secrets}}</code> values. You can set these in your GitHub repo’s Settings page: there is a section there called Secrets; add the three secrets and this workflow will have access to them.</p> <p> Now push your code to the repo and your <code class="inline">ci</code> workflow should test, build, and deploy your code to ECS. You can watch the progress in the Actions tab of your GitHub repo.</p> <p> Verify this by going to <code class="inline">your-lb-url.com/health</code> to see the version and node name of your app.</p> <h2> Wrap Up</h2> <p> There is now a reproducible infrastructure definition, and it’s deployed on every push to the repository. Most projects would probably be done at this point.</p> <p> In Part 3, I’ll show you how to use ECS Service Discovery to build a distributed cluster on ECS.</p> <p> <a href="/posts/deploying-elixir-on-ecs-part-1">Part 1 - using Terraform to describe and build the infrastructure</a>
<a href="/posts/deploying-elixir-on-ecs-part-3">Part 3 - using ECS Service Discovery to build a distributed Elixir cluster</a></p></p>
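<p> If you want to check the Makefile’s <code class="inline">APP_NAME</code>/<code class="inline">APP_VSN</code> extraction in isolation, here is a sketch that runs the same grep/sed/cut pipelines against a throwaway file. The <code class="inline">mix.exs</code> contents below are a hypothetical sample mirroring a freshly generated Phoenix app:</p>

```shell
#!/bin/sh
# Write a sample mix.exs to a temp file (hypothetical project keys).
SAMPLE=$(mktemp)
cat > "$SAMPLE" <<'EOF'
defmodule EcsApp.MixProject do
  use Mix.Project

  def project do
    [
      app: :ecs_app,
      version: "0.1.0",
      elixir: "~> 1.10"
    ]
  end
end
EOF

# The same pipelines the Makefile uses to derive its variables
APP_NAME=$(grep 'app:' "$SAMPLE" | sed -e 's/\[//g' -e 's/ //g' -e 's/app://' -e 's/[:,]//g')
APP_VSN=$(grep 'version:' "$SAMPLE" | cut -d '"' -f2)

echo "${APP_NAME}:${APP_VSN}"   # the local image would be tagged ecs_app:0.1.0
rm -f "$SAMPLE"
```

<p> This is handy for confirming that the image tags (<code class="inline">$(APP_NAME):$(APP_VSN)</code>) will come out the way you expect before running a full <code class="inline">make build</code>.</p>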
Matt Silbernagel | https://silbernagel.dev/posts/deploying-elixir-on-ecs-part-2 | Deploying Elixir to ECS - Part 2 | 2020-10-01T00:50:47.077764Z<ul>
<li>
<a href="/posts/deploying-elixir-on-ecs-part-2">Part 2 - building and deploying a docker image to ECS</a> </li>
<li>
<a href="/posts/deploying-elixir-on-ecs-part-3">Part 3 - using ECS Service Discovery to build a distributed Elixir cluster</a> </li></ul>
<p>
I love PaaS systems for deploying simple Elixir web services. They make deployment relatively painless, but they limit the power of the BEAM by making distributed clustering difficult, if not impossible. For a project that requires distribution, ECS is a good option. This series of posts will lay out how to build the infrastructure, set up CI/CD, and connect the Elixir nodes into a distributed cluster.</p> <h1> The infrastructure</h1> <p> Below I’ve split the terraform into sections and will talk through each one. Installing and configuring terraform for your AWS account is outside the scope of this article, but the HashiCorp site provides a great introduction.</p> <p> <a href="https://learn.hashicorp.com/collections/terraform/aws-get-started">HashiCorp - Getting started with AWS</a></p> <h2> Initialize Terraform</h2> <p> To start with, you’ll need to tell terraform that you want to use the AWS provider. Add this to a file called <code class="inline">main.tf</code> and run <code class="inline">terraform init</code>.</p> <pre><code class="hcl">provider aws {
  profile = "default"
region = "us-east-1"
}</code></pre> <blockquote> <p>
I typically keep my terraform files in an <code class="inline">infrastructure</code> folder in the root of my project </p> </blockquote> <h2> Add a VPC</h2> <p> One requirement for ECS is a VPC. Most likely you’ll want to build a new VPC and use that, but for brevity you can just import the default VPC that comes with your AWS account. In the AWS console, go to VPCs and find your default VPC’s ID (it’ll start with <code class="inline">vpc-</code>) along with its CIDR block.</p> <p> Add to your terraform file:</p> <pre><code class="hcl">resource aws_vpc main {
  cidr_block = "your_vpc_CIDR_block"
tags = {
Name = "Default VPC"
}
}
data aws_subnet_ids vpc_subnets {
vpc_id = aws_vpc.main.id
}
data aws_subnet default_subnet {
count = "${length(data.aws_subnet_ids.vpc_subnets.ids)}"
id = "${tolist(data.aws_subnet_ids.vpc_subnets.ids)[count.index]}"
}
</code></pre> <p> Save and run <code class="inline">terraform import aws_vpc.main your_vpc_id</code> and then <code class="inline">terraform apply</code> to pull in all of the subnets, which are needed for subsequent tasks.</p> <p> This should import the current state of your default VPC and allow you to pass it around to other terraform modules.</p> <h2> Build the container repo</h2> <p> You’ll need a place to upload your container so that ECS can pull it in. AWS offers ECR (Elastic Container Registry), which is essentially a private Docker registry.</p> <p> To create the registry, add to your terraform:</p> <pre><code class="hcl">resource "aws_ecr_repository" "repo" {
  name = "your_repo" # give this a better name
image_tag_mutability = "MUTABLE"
image_scanning_configuration {
scan_on_push = true
}
}
output repo_url {
value = aws_ecr_repository.repo.repository_url
}</code></pre> <p> This gives our CI/CD process a place to push images.
Notice the output is the URL of the created repository. This will be important later when we talk about deployment.</p> <h2> Build the ALB (Application Load Balancer)</h2> <p> This will be the public entry point to your web service, directing traffic to one of your many containers. To keep things simple, this shows how to allow port 80 traffic, but I’ve commented the locations that would require a change for port 443.</p> <blockquote> <p>
If you want to use SSL, you’ll need to generate a certificate for your domain name. If you manage your domain with Route53, this is easy enough to do in AWS Certificate Manager. </p> </blockquote> <pre><code class="hcl"># configure the ALB target group
resource aws_lb_target_group lb_target_group {
name = "your-app-tg" # choose a name that makes sense
port = 4000 # Expose port 4000 from our container
protocol = "HTTP"
vpc_id = aws_vpc.main.id # our default vpc id
target_type = "ip"
health_check {
path = "/health"
port = "4000"
}
stickiness {
type = "lb_cookie"
enabled = "true"
cookie_duration = "3600"
}
}
resource aws_lb_listener ecs_listener {
load_balancer_arn = "${aws_lb.load_balancer.arn}"
port = "80" # 443 if using SSL
protocol = "HTTP" # HTTPS if using SSL
# uncomment following lines if using SSL
# ssl_policy = "ELBSecurityPolicy-2016-08"
# certificate_arn = "" # the ARN a valid cert from Certificate Manager
default_action {
type = "forward"
target_group_arn = "${aws_lb_target_group.lb_target_group.arn}"
}
}
resource aws_lb load_balancer {
name = "${var.app_name}_lb"
internal = false
load_balancer_type = "application"
security_groups = [aws_security_group.lb_security_group.id]
subnets = data.aws_subnet.default_subnet.*.id
enable_deletion_protection = true
}
# needed to allow web traffic to hit the ALB
resource aws_security_group lb_security_group {
name = "lb_security_group"
description = "Allow all outbound traffic and https inbound"
vpc_id = aws_vpc.main.id
ingress {
description = "HTTP" # use HTTPS if ssl is enabled
from_port = 80 # use 443 if ssl is enabled
to_port = 80 # use 443 if ssl is enabled
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
egress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}
}
# the url where you app will be accessible
output dns {
value = aws_lb.load_balancer.dns_name
}
</code></pre> <h2> Configure ECS</h2> <p> And now, finally, our ECS configuration. ECS has the concept of Clusters, which are groups of Services; each Service runs one or more instances of a Task, which is defined by a Task Definition. The following configuration will build one cluster with one service that runs two instances of a task.</p> <h3> Task Definition</h3> <p> The Task Definition is basically a description of how to run your container. Later on, when we deploy, we’ll create new revisions of this initial Task Definition that point to different versions of your Docker image. We can then instruct the ECS service to use the new revision and start new tasks with newer versions of our code.</p> <p> The Task Definition will also need some roles created.</p> <ul> <li>
The ECS execution role is used when the task starts. It needs access to the container repository and logs. </li>
<li>
The ECS task role is what the task runs under. Use it if your task needs access to other AWS services like S3. </li> </ul> <p> And we’ll also need to create the log group so the task can log output.</p> <pre><code class="hcl"># this may need to change depending
# on how often you run this
variable task_version {
default = 1
}
# this is the role that your container runs as
# you can give it permissions to other parts of AWS that it may need to access
# like S3 or DynamoDB for instance.
resource aws_iam_role ecs_role {
name = "ecs_role"
assume_role_policy = <<-EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "",
"Effect": "Allow",
"Principal": {
"Service": "ecs-tasks.amazonaws.com"
},
"Action": "sts:AssumeRole"
}
]
}
EOF
}
# this role and the following permissions are required
# for the ECS service to pull the container from ECR
# and write log events
resource aws_iam_role ecs_execution_role {
name = "ecs_execution_role"
assume_role_policy = <<-EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "",
"Effect": "Allow",
"Principal": {
"Service": "ecs-tasks.amazonaws.com"
},
"Action": "sts:AssumeRole"
}
]
}
EOF
}
resource aws_iam_policy ecs_policy {
name = "ecs_policy"
policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"ecr:GetAuthorizationToken",
"ecr:BatchCheckLayerAvailability",
"ecr:GetDownloadUrlForLayer",
"ecr:BatchGetImage",
"logs:CreateLogStream",
"logs:PutLogEvents"
],
"Resource": "*"
}
]
}
EOF
}
resource aws_iam_policy_attachment attach_ecs_policy {
name = "attach-ecs-policy"
roles = [aws_iam_role.ecs_execution_role.name]
policy_arn = aws_iam_policy.ecs_policy.arn
}
resource aws_cloudwatch_log_group log_group {
name = "/ecs/your_app"
}
resource aws_ecs_task_definition task_definition {
family = "your_app_task"
task_role_arn = aws_iam_role.ecs_role.arn
execution_role_arn = aws_iam_role.ecs_execution_role.arn
requires_compatibilities = ["FARGATE"]
memory = 8192
cpu = 4096
network_mode = "awsvpc"
container_definitions = <<-EOF
[
{
"cpu": 0,
"image": "${aws_ecr_repository.repo.repository_url}:latest",
"logConfiguration": {
"logDriver": "awslogs",
"options": {
"awslogs-group": "${aws_cloudwatch_log_group.log_group.name}",
"awslogs-region": "us-east-1",
"awslogs-stream-prefix": "ecs"
}
},
"portMappings": [
{
"hostPort": 4000,
"protocol": "tcp",
"containerPort": 4000
}
],
"environment": [],
"mountPoints": [],
"volumesFrom": [],
"essential": true,
"links": [],
"name": "your_app"
}
]
EOF
}</code></pre> <h3> Cluster and Service</h3> <p> These are pretty easy. We just need to</p> <ul> <li>
create the service and tell it about the task and load balancer </li>
<li>
create a security group to allow traffic out to the world and in from our VPC </li>
<li>
create a cluster </li> </ul> <pre><code class="hcl"> # this gets your AWS account id
# needed to build the task ARN later
data "aws_caller_identity" "current" {}
resource aws_ecs_cluster ecs_cluster {
name = "your_app_cluster"
}
resource aws_ecs_service service {
name = "your_app_service"
cluster = aws_ecs_cluster.ecs_cluster.id
task_definition = "arn:aws:ecs:us-east-1:${data.aws_caller_identity.current.account_id}:task-definition/${aws_ecs_task_definition.task_definition.family}:${var.task_version}"
desired_count = 2
launch_type = "FARGATE"
network_configuration {
security_groups = [aws_security_group.security_group.id]
subnets = data.aws_subnet.default_subnet.*.id
assign_public_ip = true # this seems to be required to access the container repo
}
load_balancer {
target_group_arn = aws_lb_target_group.lb_target_group.arn
container_name = "your_app"
container_port = "4000"
}
}
# needed so that our container can access the outside world
# and traffic in your VPC can access the containers
resource aws_security_group security_group {
name = "your_app_ecs"
description = "Allow all outbound traffic"
vpc_id = aws_vpc.main.id
ingress {
description = "HTTP/S Traffic"
from_port = 0
to_port = 65535
protocol = "tcp"
cidr_blocks = [aws_vpc.main.cidr_block]
}
egress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}
}</code></pre> <h2> The final file</h2> <p> Assuming you have the necessary permissions, you should be able to <code class="inline">terraform plan</code> and <code class="inline">terraform apply</code> the following file.</p> <pre><code class="hcl">provider aws {
  profile = "default"
region = "us-east-1"
}
variable app_name {
default = "ecs_app"
}
variable task_version {
default = 1
}
resource aws_vpc main {
cidr_block = "172.31.0.0/16"
tags = {
Name = "Default VPC"
}
}
resource "aws_ecr_repository" "repo" {
name = "${var.app_name}_repo"
image_tag_mutability = "MUTABLE"
image_scanning_configuration {
scan_on_push = true
}
}
data aws_subnet_ids vpc_subnets {
vpc_id = aws_vpc.main.id
}
data aws_subnet default_subnet {
count = "${length(data.aws_subnet_ids.vpc_subnets.ids)}"
id = "${tolist(data.aws_subnet_ids.vpc_subnets.ids)[count.index]}"
}
data "aws_caller_identity" "current" {}
resource aws_lb_target_group lb_target_group {
name = "ecs-app-tg"
port = 4000
protocol = "HTTP"
vpc_id = aws_vpc.main.id
target_type = "ip"
health_check {
path = "/health"
port = "4000"
}
stickiness {
type = "lb_cookie"
enabled = "true"
cookie_duration = "3600"
}
}
resource aws_lb_listener ecs_listener {
load_balancer_arn = aws_lb.load_balancer.arn
port = "80"
protocol = "HTTP"
default_action {
type = "forward"
target_group_arn = aws_lb_target_group.lb_target_group.arn
}
}
resource aws_lb load_balancer {
name = "ecs-app-lb"
internal = false
load_balancer_type = "application"
security_groups = [aws_security_group.lb_security_group.id]
subnets = data.aws_subnet.default_subnet.*.id
enable_deletion_protection = true
}
resource aws_security_group lb_security_group {
name = "lb_security_group"
description = "Allow all outbound traffic and https inbound"
vpc_id = aws_vpc.main.id
ingress {
description = "HTTP"
from_port = 80
to_port = 80
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
egress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}
}
resource aws_ecs_cluster ecs_cluster {
name = "${var.app_name}_cluster"
}
resource "aws_ecs_task_definition" "task_definition" {
  family                   = "${var.app_name}_task"
  task_role_arn            = aws_iam_role.ecs_role.arn
  execution_role_arn       = aws_iam_role.ecs_execution_role.arn
  requires_compatibilities = ["FARGATE"]
  memory                   = 8192
  cpu                      = 4096
  network_mode             = "awsvpc"

  # The container definition is raw JSON; Terraform interpolates the
  # ECR repository URL and the log group name into it
  container_definitions = <<-EOF
  [
    {
      "cpu": 0,
      "image": "${aws_ecr_repository.repo.repository_url}:latest",
      "logConfiguration": {
        "logDriver": "awslogs",
        "options": {
          "awslogs-group": "${aws_cloudwatch_log_group.log_group.name}",
          "awslogs-region": "us-east-1",
          "awslogs-stream-prefix": "ecs"
        }
      },
      "portMappings": [
        {
          "hostPort": 4000,
          "protocol": "tcp",
          "containerPort": 4000
        }
      ],
      "environment": [],
      "mountPoints": [],
      "volumesFrom": [],
      "essential": true,
      "links": [],
      "name": "${var.app_name}"
    }
  ]
  EOF
}
resource "aws_ecs_service" "service" {
  name    = "${var.app_name}_service"
  cluster = aws_ecs_cluster.ecs_cluster.id

  # Pin the service to an explicit task definition revision so deploys
  # happen only when var.task_version is bumped
  task_definition = "arn:aws:ecs:us-east-1:${data.aws_caller_identity.current.account_id}:task-definition/${aws_ecs_task_definition.task_definition.family}:${var.task_version}"
  desired_count   = 1
  launch_type     = "FARGATE"

  network_configuration {
    security_groups  = [aws_security_group.security_group.id]
    subnets          = data.aws_subnet.default_subnet[*].id
    assign_public_ip = true
  }

  load_balancer {
    target_group_arn = aws_lb_target_group.lb_target_group.arn
    container_name   = var.app_name
    container_port   = 4000
  }
}
resource "aws_security_group" "security_group" {
  name        = var.app_name
  description = "Allow inbound traffic from within the VPC and all outbound traffic"
  vpc_id      = aws_vpc.main.id

  ingress {
    description = "VPC-internal traffic"
    from_port   = 0
    to_port     = 65535
    protocol    = "tcp"
    cidr_blocks = [aws_vpc.main.cidr_block]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}
# Both roles trust ecs-tasks.amazonaws.com: the task role is assumed by the
# running containers, while the execution role is used by ECS itself to pull
# the image and ship logs
resource "aws_iam_role" "ecs_role" {
  name = "ecs_role"

  assume_role_policy = <<-EOF
  {
    "Version": "2012-10-17",
    "Statement": [
      {
        "Sid": "",
        "Effect": "Allow",
        "Principal": {
          "Service": "ecs-tasks.amazonaws.com"
        },
        "Action": "sts:AssumeRole"
      }
    ]
  }
  EOF
}

resource "aws_iam_role" "ecs_execution_role" {
  name = "ecs_execution_role"

  assume_role_policy = <<-EOF
  {
    "Version": "2012-10-17",
    "Statement": [
      {
        "Sid": "",
        "Effect": "Allow",
        "Principal": {
          "Service": "ecs-tasks.amazonaws.com"
        },
        "Action": "sts:AssumeRole"
      }
    ]
  }
  EOF
}
# Permissions the execution role needs: pull from ECR and write to CloudWatch
resource "aws_iam_policy" "ecs_policy" {
  name = "ecs_policy"

  policy = <<-EOF
  {
    "Version": "2012-10-17",
    "Statement": [
      {
        "Effect": "Allow",
        "Action": [
          "ecr:GetAuthorizationToken",
          "ecr:BatchCheckLayerAvailability",
          "ecr:GetDownloadUrlForLayer",
          "ecr:BatchGetImage",
          "logs:CreateLogStream",
          "logs:PutLogEvents"
        ],
        "Resource": "*"
      }
    ]
  }
  EOF
}
resource "aws_iam_policy_attachment" "attach_ecs_policy" {
  name       = "attach-ecs-policy"
  roles      = [aws_iam_role.ecs_execution_role.name]
  policy_arn = aws_iam_policy.ecs_policy.arn
}
resource "aws_cloudwatch_log_group" "log_group" {
  name = "/ecs/${var.app_name}"
}
output "repo_url" {
  value = aws_ecr_repository.repo.repository_url
}

output "dns" {
  value = aws_lb.load_balancer.dns_name
}</code></pre> <h1> Wrap up</h1> <p> With the provided Terraform file, you should be able to get the infrastructure set up. Of course, there is no image to pull and run yet, so ECS will repeatedly try to start the task and fail.</p> <p> In Part 2 we’ll push a Docker container with a simple Phoenix app to our private image repo and instruct ECS to pull and run it.</p> <ul> <li>
<a href="/posts/deploying-elixir-on-ecs-part-2">Part 2 - building and deploying a docker image to ECS</a> </li>
<li>
<a href="/posts/deploying-elixir-on-ecs-part-3">Part 3 - using ECS Service Discovery to build a distributed Elixir cluster</a> </li> </ul>
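<p> The plan/apply workflow described above can be sketched from the command line. This assumes the file is saved as <code class="inline">main.tf</code> in the current directory and that your AWS credentials are configured under the <code class="inline">default</code> profile:</p>

```shell
# Download the AWS provider and initialize local state
terraform init

# Preview the resources that will be created
terraform plan

# Create the infrastructure (confirm with "yes" when prompted)
terraform apply

# Print the outputs defined at the bottom of the file;
# the repo URL is what we'll push the Docker image to in Part 2
terraform output repo_url
terraform output dns
```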
Matt Silbernagelhttps://silbernagel.dev/posts/deploying-elixir-on-ecs-part-1Deploying Elixir to ECS - Part 12020-09-23T00:50:47.077772Z