Data Science Tools

Async IO
  1. Intro
  2. Complete
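As a quick intro sketch (the function names here are hypothetical, not from the linked tutorial): `asyncio.gather` runs coroutines concurrently and returns their results in call order, even if they finish out of order.

```python
import asyncio

# hypothetical example: fetch() stands in for real async I/O
async def fetch(name, delay):
    await asyncio.sleep(delay)   # yields control while "waiting"
    return name

async def main():
    # gather runs both coroutines concurrently; results keep call order
    return await asyncio.gather(fetch("a", 0.02), fetch("b", 0.01))

print(asyncio.run(main()))  # ['a', 'b']
```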
Clean code:

  1. Installing pyenv
  2. Intro to pyenv
  3. pyenv virtualenv

(How does reshape work?) A shape of (2,4,6) is like a tree: 2 branches, each with 4 branches, each holding 6 leaves.
As far as I can tell, reshape effectively flattens the tree and divides it again into a new tree, but the total number of elements must stay the same, e.g. 2*4*6 = 48 = 4*2*3*2.
code:

```python
import numpy

rng = numpy.random.RandomState(234)
a = rng.randn(2, 3, 10)           # 2*3*10 = 60 elements
print(a.shape)
print(a)
b = numpy.reshape(a, (3, 5, -1))  # -1 infers the last axis: 60 / (3*5) = 4
print(b.shape)
print(b)
```
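A small sketch of the flatten-and-re-split idea: reshape preserves the flat element sequence (in numpy's default C order), which `ravel` makes explicit.

```python
import numpy as np

a = np.arange(24).reshape(2, 4, 3)   # small example array, elements 0..23
b = a.reshape(4, 2, 3)               # same 24 elements, new "tree"

# reshape preserves the flat (C-order) element sequence
assert np.array_equal(a.ravel(), b.ravel())
```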
How to add extensions to Jupyter: extensions

  1. Optimization problems, a nice tutorial on finding the minima
  2. Minima / maxima: finding them in a 1D numpy array

Using numpy efficiently - explaining why vectorized operations are faster. Fast vector calculation, a benchmark between list, map, and vectorize; vectorize wins. The idea is to use vectorize with a function that may involve if-conditions over a vector, and do it as fast as possible.
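A minimal sketch of that comparison (function and variable names here are hypothetical, not from the linked benchmark): the same branchy scalar function applied via a list comprehension, map, and np.vectorize. Note that np.vectorize is a convenience loop rather than true compiled vectorization, so which variant wins depends on the benchmark; all three produce identical results.

```python
import numpy as np

def clip_sign(x):
    # branchy scalar function: clips magnitude at 1, keeping the sign
    if x > 1:
        return 1.0
    elif x < -1:
        return -1.0
    return x

data = np.linspace(-3, 3, 7)             # [-3, -2, -1, 0, 1, 2, 3]

as_list = [clip_sign(x) for x in data]   # list comprehension
as_map = list(map(clip_sign, data))      # map
as_vec = np.vectorize(clip_sign)(data)   # np.vectorize

assert as_list == as_map
assert np.allclose(as_vec, as_list)      # all three agree
```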

  1. Great introductory tutorial about using pandas: loading, loading from zip, seeing the table's features, accessing rows & columns, boolean operations, calculating on a whole row/column with a simple function (and even on two columns), dealing with time/date parsing.
  2. Boolean masking on the underlying numpy values:
     def mask_with_values(df):
         mask = df['A'].values == 'foo'
         return df[mask]
  3. Accessing dataframe rows, columns and cells - by name, by index, by python methods.
  4. Dealing with time series in pandas:
     1. Create a new column based on a (boolean or not) column and a calculation
     2. Using python (map)
     3. Using numpy
     4. Using a function (not as pretty)
  5. Given a DataFrame, the shift() function can be used to create copies of columns that are pushed forward (rows of NaN values added to the front) or pulled back (rows of NaN values added to the end):
     1. df['t'] = [x for x in range(10)]
     2. df['t-1'] = df['t'].shift(1)
     3. df['t+1'] = df['t'].shift(-1)
  6. Dataframe Validation In Python - A Practical Introduction - Yotam Perkal - PyCon Israel 2018. In this talk, I will present the problem and give a practical overview (accompanied by Jupyter Notebook code examples) of three libraries that aim to address it:
     1. Voluptuous - uses Schema definitions in order to validate data [https://github.com/alecthomas/voluptuous]
     2. Engarde - a lightweight way to explicitly state your assumptions about the data and check that they're actually true [https://github.com/TomAugspurger/engarde]
     3. TDDA - Test Driven Data Analysis [https://github.com/tdda/tdda]
     By the end of this talk, you will understand the importance of data validation and get a sense of how to integrate data validation principles as part of the ML pipeline.
  7. Stop using iterrows, use apply.
  8. json_normalize()
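The shift() lines above can be run end-to-end as a small sketch (column names as in the snippet): shift(1) lags the column, shift(-1) leads it.

```python
import pandas as pd

df = pd.DataFrame({'t': range(10)})
df['t-1'] = df['t'].shift(1)    # lag: values pushed forward, NaN at the front
df['t+1'] = df['t'].shift(-1)   # lead: values pulled back, NaN at the end

# row 0 has no previous value; row 9 has no next value
assert pd.isna(df.loc[0, 't-1']) and df.loc[1, 't-1'] == 0
assert pd.isna(df.loc[9, 't+1']) and df.loc[8, 't+1'] == 9
```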

  1. Pandas summary
  2. Sweetviz - "Sweetviz is an open-source Python library that generates beautiful, high-density visualizations to kickstart EDA (Exploratory Data Analysis) with just two lines of code. Output is a fully self-contained HTML application. The system is built around quickly visualizing target values and comparing datasets. Its goal is to help quick analysis of target characteristics, training vs testing data, and other such data characterization tasks." - by Sweetviz

by Jeremy Chow

SCI-KIT LEARN
  1. Pipeline to json 1, 2
  2. cuML - multi-GPU, multi-node-GPU alternative for SKLEARN algorithms
  3. GPU TSNE ^
  4. Awesome code examples about using svm/knn/naive/log regression in sklearn in python, i.e., "fitting a model onto the data". Also insanely fast, see here.
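A minimal sketch of "fitting a model onto the data" with sklearn's uniform fit/score API, on hypothetical toy blobs (not the linked examples' data):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier

# hypothetical toy data: two well-separated 2D blobs
rng = np.random.RandomState(0)
X = np.vstack([rng.randn(20, 2) - 2, rng.randn(20, 2) + 2])
y = np.array([0] * 20 + [1] * 20)

# svm / knn / logistic regression all share the same fit/score interface
for model in (LogisticRegression(), SVC(), KNeighborsClassifier(3)):
    model.fit(X, y)              # "fitting a model onto the data"
    assert model.score(X, y) > 0.9   # separable blobs -> near-perfect fit
```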
  1. Functional API for sklearn, using pipelines. Thank you, sk-lego.
  2. Images by SK-Lego

  1. Medium on all fast.ai courses, 14 posts

1. What is? by Vidhya - PyCaret is an open-source machine learning library in Python that helps you from data preparation to model deployment. It is easy to use, and you can do almost every data science project task with just one line of code.

  1. Understanding git
  2. pre-commit
  3. Installing git LFS
  4. Use git lfs
  5. Download git-lfs
  6. Git wip (great)
On this page
Python
Virtual Environments
JUPYTER
SCIPY
NUMPY
PANDAS
Exploratory Data Analysis (EDA)
TIMESERIES
FAST.AI
PYCARET
NVIDIA TF CUDA CUDNN
GCP
GIT / Bitbucket