The above diagram, generated here, details our Kubernetes clusters. We maintain two: one for staging and one for production. Each cluster is deployed on GKE in the Las Vegas region (us-west4), each in its own GCP project.
In the diagram above, we highlight the production GKE cluster and the processes deployed there, whether from packages built in this repo or from vendor-managed software. Let's traverse the diagram starting from the Load Balancer, which is where users first interact with Neon Law software.
The Load Balancer is managed by Google Cloud. By forcing all ingress traffic through the Load Balancer, we gain the benefits of this managed service, including SSL termination, a WAF, and CDN support.
Kubernetes Ingress within our cluster is managed by GKE and GCP. This way we can leverage GCP SSL certificates and IAP for accessing internal company resources. We have three defined ingress rules, one of which serves Confluent Command Center.
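As a rough sketch, one such ingress rule could be expressed in Terraform roughly as follows. The hostname, namespace, certificate, and backend service names below are illustrative placeholders, not our actual configuration:

```hcl
# Hypothetical sketch: a GKE Ingress rule managed from Terraform.
# Hostname, namespace, and backend service name are placeholders.
resource "kubernetes_ingress_v1" "command_center" {
  metadata {
    name      = "command-center"
    namespace = "confluent"
    annotations = {
      # Use the GCP external HTTP(S) Load Balancer and a Google-managed cert.
      "kubernetes.io/ingress.class"            = "gce"
      "networking.gke.io/managed-certificates" = "command-center-cert"
    }
  }

  spec {
    rule {
      host = "command-center.example.com" # placeholder host
      http {
        path {
          path = "/"
          backend {
            service {
              name = "command-center"
              port {
                number = 9021
              }
            }
          }
        }
      }
    }
  }
}
```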
The Superset service points to a managed Apache Superset offering. Like Neo4j and every other piece of external software run in this Kubernetes cluster, it is installed with its Helm chart and managed in our Terraform scripts. You can visit our Superset instance at https://superset.neonlaw.com (our staging instance lives at https://superset.neonlaw.net).
Superset is an open-source business-intelligence tool that provides us with internal data snapshots and visualizations. In each Superset instance, we plug in our data sources.
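A Helm-chart install driven from Terraform might look roughly like this. The chart repository URL, version, and values file are assumptions for illustration:

```hcl
# Hypothetical sketch: installing Superset from its Helm chart via Terraform.
resource "helm_release" "superset" {
  name       = "superset"
  namespace  = "superset"
  repository = "https://apache.github.io/superset" # assumed chart repository
  chart      = "superset"

  # Pin the chart version so upgrades are deliberate (placeholder version).
  version = "0.10.0"

  # Environment-specific settings (databases, ingress, auth) live in a
  # values file alongside the Terraform module.
  values = [file("${path.module}/superset-values.yaml")]
}
```

Keeping every external chart in Terraform this way means the cluster's third-party software is reviewable and reproducible from one place.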
We currently use Google Cloud's managed PostgreSQL offering. Every other database is deployed via Helm charts in our cluster: we use Neo4j, Elasticsearch, and Kafka. Each can be completely regenerated from data stored in our PostgreSQL database; files in Blob Storage are the only data that cannot.
We use Google Cloud Storage buckets to store files. All access to this blob storage is recorded in a separate GCP audit project, so we have a full accounting of who viewed which file. This is important for conflict-of-interest checks.
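In Terraform, a file bucket with Data Access audit logging for Cloud Storage could be sketched as below. The bucket name and project IDs are placeholders:

```hcl
# Hypothetical sketch: a file bucket plus Data Access audit logs for GCS.
resource "google_storage_bucket" "files" {
  name     = "example-neon-law-files"      # placeholder bucket name
  location = "US-WEST4"
  project  = "example-production-project"  # placeholder project ID

  uniform_bucket_level_access = true
}

# Enable DATA_READ / DATA_WRITE audit logs for Cloud Storage so that
# every object read and write is captured in the audit logs.
resource "google_project_iam_audit_config" "storage" {
  project = "example-production-project" # placeholder project ID
  service = "storage.googleapis.com"

  audit_log_config {
    log_type = "DATA_READ"
  }

  audit_log_config {
    log_type = "DATA_WRITE"
  }
}
```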
We use Logflare to collect logs from pods and post them to BigQuery. We also use Logflare as a webhook server for ingesting third-party data.
Some of our applications depend on authentication tokens, which we obtain from Auth0. After a user authenticates with Auth0, they receive a limited-access token for our Deployments that are exposed through Services.
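Registering such an application with Auth0 can itself be managed from Terraform via the Auth0 provider. The application name and URLs below are placeholders, not our real tenant configuration:

```hcl
# Hypothetical sketch: an Auth0 application registration managed in Terraform.
# Users who log in through this client receive limited-access tokens.
resource "auth0_client" "web_app" {
  name     = "example-web-app" # placeholder application name
  app_type = "regular_web"

  # Placeholder URLs for the OAuth redirect and post-logout flows.
  callbacks           = ["https://app.example.com/callback"]
  allowed_logout_urls = ["https://app.example.com"]
}
```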
Deployments without Services listen for changes in one of our databases, namely
Kafka, and process that data. These include our