
GPU accelerated Deep Learning in JavaScript

  • JS-PyTorch is a Deep Learning JavaScript library built from scratch to closely follow PyTorch's syntax.
  • Feel free to try out a Web Demo!

 

1. TL;DR

  • In this article, we will cover simple instructions and use-cases for JS-PyTorch.

Note: The project's Documentation contains details on all available operations and layers.

 

2. Running it Yourself

Install & Import

To start off, you can install the package locally by running npm install js-pytorch in the terminal.
Then, on your JavaScript file, import the package with:

const torch = require("js-pytorch");
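
With the package imported, a quick sanity check confirms everything is wired up (a minimal sketch; the .data attribute, also used later in this article to print the loss, exposes a Tensor's contents):

// Create a small Tensor and print its contents:
let t = torch.tensor([1, 2, 3]);
console.log(t.data); // [1, 2, 3]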

 

Create Tensors

To use all of these cool deep learning Tensor Operations, we need to instantiate some Tensors:

// Instantiate Tensors:
let x = torch.randn([8,4,5]);
let w = torch.randn([8,5,4], requires_grad = true, device = 'gpu');
let b = torch.tensor([0.2, 0.5, 0.1, 0.0], requires_grad = true, device = 'gpu');

The syntax is the same as PyTorch's:

  • torch.tensor(Array) receives an array and turns it into a Tensor.
  • torch.randn([shape]) creates a Tensor filled with normally-distributed random numbers, with the provided shape.
  • The requires_grad argument is set to true if we want to optimize this parameter (by tracking its gradients).
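
For instance, torch.tensor also accepts nested arrays, and requires_grad can be left out when no gradient tracking is needed (a minimal sketch; the default of false mirrors PyTorch's behavior, which this library follows):

// Nested arrays produce higher-dimensional Tensors (2D here):
let m = torch.tensor([[1, 2], [3, 4]]);
console.log(m.data); // [[1, 2], [3, 4]]

// Pass requires_grad = true only for parameters you intend to optimize:
let p = torch.randn([2, 2], requires_grad = true);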

 

Tensor Operations

Now, let's run some operations on these Tensors:

// Make calculations:
let out = torch.matmul(x, w);
out = torch.add(out, b);
  • torch.matmul(x, w) performs matrix multiplication between x and w (just like in PyTorch).
  • torch.add(out, b) adds both Tensors.

Note: As w has requires_grad set to true, its child out will also have its gradients tracked.
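
As a sanity check on the shapes involved, here is the bookkeeping for the operations above (a comment-only sketch; the broadcasting of b mirrors PyTorch's rules, which this library follows):

// x: [8, 4, 5]  (a batch of 8 matrices, each 4x5)
// w: [8, 5, 4]  (a batch of 8 matrices, each 5x4)
// torch.matmul(x, w) -> [8, 4, 4]  (batched matrix multiplication)
// torch.add(out, b)  -> [8, 4, 4]  (b, of shape [4], broadcasts over the last axis)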

 

Getting Gradients

// Compute gradients on whole graph:
out.backward();

// Get gradients from specific Tensors:
console.log(w.grad);
console.log(b.grad);
  • Calling out.backward() calculates the gradients of every Tensor that led to it (its parents), relative to out.
  • In practice, we call .backward() on the loss Tensor to get the gradients needed to reduce it.
  • To access a Tensor's gradients, simply read Tensor.grad, as in the sketch below.
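
Putting the pieces together, here is a tiny end-to-end autograd round trip (a minimal sketch using only the calls shown above):

// Build a small graph and backpropagate through it:
let a = torch.randn([2, 3], requires_grad = true);
let v = torch.randn([3, 2]);
let r = torch.matmul(a, v);

r.backward();        // compute gradients for every parent of r
console.log(a.grad); // d(r)/d(a), with the same shape as a: [2, 3]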

 

3. Full Example (Neural Network)

In this example, we implement a full Neural Network with three Linear layers and ReLU activations. The syntax for the nn.Module class is identical to PyTorch's.

const torch = require("js-pytorch");
const nn = torch.nn;
const optim = torch.optim;

// Implement Module class:
class NeuralNet extends nn.Module {
  constructor(in_size, hidden_size, out_size) {
    super();
    // Instantiate Neural Network's Layers:
    this.w1 = new nn.Linear(in_size, hidden_size);
    this.relu1 = new nn.ReLU();
    this.w2 = new nn.Linear(hidden_size, hidden_size);
    this.relu2 = new nn.ReLU();
    this.w3 = new nn.Linear(hidden_size, out_size);
  }

  forward(x) {
    let z;
    z = this.w1.forward(x);
    z = this.relu1.forward(z);
    z = this.w2.forward(z);
    z = this.relu2.forward(z);
    z = this.w3.forward(z);
    return z;
  }
}

// Instantiate Model:
let in_size = 16;
let hidden_size = 32;
let out_size = 10;
let batch_size = 16;

let model = new NeuralNet(in_size, hidden_size, out_size);

// Define loss function and optimizer:
let loss_func = new nn.CrossEntropyLoss();
let optimizer = new optim.Adam(model.parameters(), 3e-3);
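// (The second argument, 3e-3, is the learning rate; Adam's other
// hyperparameters are presumably left at the library defaults.)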

// Instantiate input and output:
let x = torch.randn([batch_size, in_size]);
let y = torch.randint(0, out_size, [batch_size]);
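// (torch.randint(low, high, shape) draws integer class labels in
// [low, high), as in PyTorch, so y holds 16 labels from 0 to 9.)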
let loss;

// Training Loop:
for (let i = 0; i < 256; i++) {
  let z = model.forward(x);

  // Get loss:
  loss = loss_func.forward(z, y);

  // Backpropagate the loss using torch.tensor's backward() method:
  loss.backward();

  // Update the weights:
  optimizer.step();

  // Reset the gradients to zero after each training step:
  optimizer.zero_grad();

  // Print current loss:
  console.log(`Iter: ${i} - Loss: ${loss.data}`);
}
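
Once training finishes, the same forward method serves for inference (a minimal sketch; x_test is a hypothetical fresh batch, and the outputs are raw class scores):

// Run the trained model on unseen data:
let x_test = torch.randn([batch_size, in_size]);
let logits = model.forward(x_test);
console.log(logits.data); // shape [batch_size, out_size]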

 

4. Conclusion

  • Hope you enjoyed the package!
  • Feel free to contribute or reach out if you have any questions.
