Full-Stack & Ops

CONTINUOUS INTEGRATION

  1. Travis
  2. CircleCI
  3. TeamCity
  4. Jenkins
  5. GitHub Actions

PACKAGE REPOSITORIES

  1. PyPI - public
  2. Gemfury - private

DEVOPS / SRE

Docker

Kubernetes

  • For beginners:
  • Advanced
    • 1, 2, 3,

Helm

Kubeflow

MiniKF

S2I

  • Builds Docker images from Git repositories

Terraform

Airflow

Prefect

Seldon

Tutorials

AWS Lambda

RabbitMQ

ActiveMQ

  • Apache ActiveMQ™ is the most popular open source, multi-protocol, Java-based messaging server

Kafka

Also, the routing logic of AMQP can be fairly complicated compared to Apache Kafka's; in Kafka, each consumer simply decides which messages to read.
In addition to this routing simplicity, developers and DevOps staff often prefer Apache Kafka for its high throughput, scalability, performance, and durability, although developers still swear by all three systems for various reasons.
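
A minimal sketch of that consumer-driven pull model, using the kafka-python client; the broker address, topic, and group id are placeholders, not taken from the notes above.

  # The consumer pulls messages and decides what to read; the broker does no routing.
  from kafka import KafkaConsumer, TopicPartition

  consumer = KafkaConsumer(
      bootstrap_servers="localhost:9092",   # placeholder broker address
      group_id="reporting-service",         # placeholder consumer group
      auto_offset_reset="earliest",
      enable_auto_commit=False,
  )

  # The consumer, not the broker, picks the partition and offset it wants to read from.
  partition = TopicPartition("events", 0)   # placeholder topic
  consumer.assign([partition])
  consumer.seek(partition, 0)               # e.g. replay the partition from the beginning

  for message in consumer:
      print(message.offset, message.value)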

ZooKeeper

KSQLDB

  • KSQLDB 101 - uses Kafka Streams to run queries over Kafka, youtube
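
A rough sketch of issuing a ksqlDB statement over its REST API from Python; the endpoint, port, and stream/topic names are assumptions, not taken from the linked course.

  # Send a ksqlDB statement to the server's REST endpoint (port 8088 is the usual default).
  import json
  import requests

  KSQLDB_URL = "http://localhost:8088/ksql"  # placeholder server address

  statement = """
      CREATE STREAM pageview_clicks AS
        SELECT user_id, page_id
        FROM pageviews              -- hypothetical source stream over a Kafka topic
        WHERE action = 'click'
        EMIT CHANGES;
  """

  response = requests.post(
      KSQLDB_URL,
      headers={"Content-Type": "application/vnd.ksql.v1+json"},
      data=json.dumps({"ksql": statement, "streamsProperties": {}}),
  )
  print(response.status_code, response.json())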

ELK

Logz.io

Sentry

  • For Python: Your code is telling you more than what your logs let on. Sentry’s full-stack monitoring gives you full visibility into your code, so you can catch issues before they become downtime.
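
A minimal sketch of wiring Sentry into a Python service with the sentry-sdk package; the DSN below is a placeholder you would take from your Sentry project settings.

  import sentry_sdk

  # Placeholder DSN; use the one from your Sentry project.
  sentry_sdk.init(
      dsn="https://examplePublicKey@o0.ingest.sentry.io/0",
      traces_sample_rate=0.2,   # sample a fraction of transactions for performance monitoring
  )

  # Unhandled exceptions are reported automatically; handled ones can be captured explicitly.
  try:
      1 / 0
  except ZeroDivisionError as exc:
      sentry_sdk.capture_exception(exc)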

Kafka for DS

Redis for DS

  1. What is, vs memcached
  2. Note: Redis is a managed dictionary; its strength lies in cases where you have a lot of data that needs to be queried and managed and you don’t want to hard-code it, for example.
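
A minimal sketch of using Redis as that "managed dictionary" from Python with the redis package; the host, keys, and values are placeholders for illustration.

  import redis

  # Placeholder connection settings for a local Redis instance.
  r = redis.Redis(host="localhost", port=6379, db=0)

  # Simple key/value access, like a persistent, shared dict.
  r.set("model:latest_version", "2024-01-v3")
  print(r.get("model:latest_version"))

  # Hashes work well for per-entity lookups that you don't want to hard-code.
  r.hset("user:42:features", mapping={"age_bucket": "30-39", "country": "IL"})
  print(r.hgetall("user:42:features"))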

Statsd

FastAPI

Snowflake / Redshift

Programming Concepts

Dependency injection - based on SOLID, a class should do one thing, so we let other classes create third-party/class objects for us instead of creating them internally, either by passing them in through __init__ or by injecting them at runtime (see the sketch below).
SOLID - the five principles of object-oriented design.
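
A small sketch of constructor-based dependency injection in Python; the class names are made up for illustration.

  from typing import Protocol


  class MessageSender(Protocol):
      def send(self, recipient: str, body: str) -> None: ...


  class SmtpSender:
      def send(self, recipient: str, body: str) -> None:
          print(f"SMTP -> {recipient}: {body}")


  class ReportService:
      # The dependency is created elsewhere and passed in via __init__,
      # so ReportService does one thing and never builds the sender itself.
      def __init__(self, sender: MessageSender) -> None:
          self._sender = sender

      def publish(self, recipient: str) -> None:
          self._sender.send(recipient, "weekly report")


  # At runtime (or in a test) we decide which implementation to inject.
  service = ReportService(sender=SmtpSender())
  service.publish("ops@example.com")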

Visualization

Plotly for JupyterLab: “jupyter labextension install @jupyterlab/plotly-extension”
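
A minimal sketch of rendering a Plotly figure inline once the extension is installed; the bundled iris sample dataset is used just to have something to draw.

  import plotly.express as px

  # Sample dataset that ships with Plotly.
  df = px.data.iris()
  fig = px.scatter(df, x="sepal_width", y="sepal_length", color="species")
  fig.show()  # renders inline in JupyterLab once the Plotly extension is set up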

Serving Models

  1. ML SYSTEM DESIGN PATTERNS, res, git
  2. Seldon
  3. Dapr is a portable, serverless, event-driven runtime that makes it easy for developers to build resilient, stateless and stateful microservices that run on the cloud and edge and embraces the diversity of languages and developer frameworks.
Dapr codifies the best practices for building microservice applications into open, independent building blocks that enable you to build portable applications with the language and framework of your choice. Each building block is independent and you can use one, some, or all of them in your application.
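
A rough sketch of what one of those building blocks looks like in practice, calling Dapr's state-management HTTP API on the sidecar from Python; the port, store name, and keys are assumptions for illustration.

  import requests

  # The Dapr sidecar exposes its building blocks over localhost HTTP (3500 is the usual default port).
  DAPR_STATE_URL = "http://localhost:3500/v1.0/state/statestore"  # "statestore" is a placeholder component name

  # Save state through the state-management building block, independent of the backing store.
  requests.post(DAPR_STATE_URL, json=[{"key": "order-42", "value": {"status": "shipped"}}])

  # Read it back by key.
  resp = requests.get(f"{DAPR_STATE_URL}/order-42")
  print(resp.json())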

EXPERIMENT MANAGEMENT

  1. Cnvrg.io -
    1. Manage - Easily navigate machine learning with dashboards, reproducible data science, dataset organization, experiment tracking and visualization, a model repository and more
    2. Build - Run and track experiments in hyperspeed with the freedom to use any compute environment, framework, programming language or tool - no configuration required
    3. Automate - Build more models and automate your machine learning from research to production using reusable components and a drag-and-drop interface
  2. Comet.ml - Comet lets you track code, experiments, and results on ML projects. It’s fast, simple, and free for open source projects.
  3. Floyd - notebooks on the cloud, similar to Colab / Kaggle, etc. GPU costs $4/h
  4. Missing link - RIP
  5. Databricks
    1. Koalas - pandas API on Apache Spark
    2. Intro to DB on Spark - has some basic sklearn-like tools and other custom operations, such as a single-vector-based aggregator for using features as an input to a model
    3. Documentations (readme, has all libraries)
    4. Medium tutorial - explains the 3 pros of DB with examples of using it with native and non-native algos
      1. Spark SQL
      2. MLflow
      3. Streaming
      4. SystemML DML using Keras models
    5. Utilizing Spark nodes for grid searching with sklearn (see the sketch after this list)
      1. from spark_sklearn import GridSearchCV
    6. How can we leverage our existing experience with modeling libraries like scikit-learn? We'll explore three approaches that make use of existing libraries, but still benefit from the parallelism provided by Spark. These approaches are:
      • Grid Search
      • Cross Validation
      • Sampling (random, chronological subsets of data across clusters)
    7. GitHub spark-sklearn (needs to be compared to what Spark has internally)
      1. Ref: It's worth pausing here to note that the architecture of this approach is different from that used by MLlib in Spark. Using spark-sklearn, we're simply distributing the cross-validation run of each model (with a specific combination of hyperparameters) across each Spark executor. Spark MLlib, on the other hand, will distribute the internals of the actual learning algorithms across the cluster.
      2. The main advantage of spark-sklearn is that it enables leveraging the very rich set of machine learning algorithms in scikit-learn. These algorithms do not run natively on a cluster (although they can be parallelized on a single machine), and by adding Spark, we can unlock a lot more horsepower than could ordinarily be used.
      3. Using spark-sklearn is a straightforward way to throw more CPU at any machine learning problem you might have. We used the package to reduce the time spent searching and reduce the error for our estimator.
  6. Medium and sklearn random trees
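
A small sketch of the distributed grid search mentioned above, assuming the spark-sklearn package and a running Spark cluster; the estimator, dataset, and parameter grid are just illustrative.

  # Each hyperparameter combination is fitted on a Spark executor;
  # each individual model still trains on a single machine.
  from pyspark import SparkContext
  from sklearn import datasets
  from sklearn.ensemble import RandomForestClassifier
  from spark_sklearn import GridSearchCV

  sc = SparkContext.getOrCreate()
  X, y = datasets.load_digits(return_X_y=True)

  param_grid = {
      "n_estimators": [50, 100, 200],
      "max_depth": [4, 8, None],
  }

  search = GridSearchCV(sc, RandomForestClassifier(), param_grid, cv=3)
  search.fit(X, y)
  print(search.best_params_, search.best_score_)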

API GATEWAY

  1. What is

NGINX

  1. NGINX is open source software for web serving, reverse proxying, caching, load balancing, media streaming, and more. It started out as a web server designed for maximum performance and stability. In addition to its HTTP server capabilities, NGINX can also function as a proxy server for email (IMAP, POP3, and SMTP) and a reverse proxy and load balancer for HTTP, TCP, and UDP servers.
  2. Cloudflare on what is a reverse proxy
  3. "One advantage of using NGINX as an API gateway is that it can perform that role while simultaneously acting as a reverse proxy, load balancer, and web server for existing HTTP traffic. If NGINX is already part of your application delivery stack then it is generally unnecessary to deploy a separate API gateway"