ML Compiler Intern - Jobs in Toronto, ON

Job Location: Toronto, ON
Education: Not Mentioned
Salary: Not Disclosed
Industry: Not Mentioned
Functional Area: Not Mentioned
Job Type: Full Time

Job Description

About us
d-Matrix is developing a novel hardware system and a full-stack software solution to accelerate large-scale modern deep neural network compute workloads for the cloud. Leveraging a combination of a unique in-memory compute array design, digital signal processing system design, and on-chip and chip-to-chip interconnect fabric, d-Matrix's AI compute engine holds the promise of drastically improving the power efficiency and compute latency of cloud inference workloads, by orders of magnitude compared to the competition.

Why d-Matrix
We want to build a company and a culture that stands the test of time. We offer the candidate a unique opportunity to express themselves and become a future leader in an industry that will have a huge influence globally. We are striving to build a culture of transparency, inclusiveness and intellectual honesty while ensuring all our team members are always learning and having fun on the journey. We have built the industry's first highly programmable in-memory computing architecture that applies to a broad class of applications, from cloud to edge. The candidate will get to work on a path-breaking architecture with a highly experienced team that knows what it takes to build a successful business.

The role: ML Compiler Intern
The Compiler Backend Team at d-Matrix is responsible for developing the software that performs the logical-to-physical mapping of a graph expressed in an IR dialect (such as Tensor Operator Set Architecture (TOSA), MHLO or Linalg) onto the physical architecture of the distributed parallel-memory accelerator used to execute it. The compiler performs multiple passes over the IR to apply operations such as tiling, compute resource allocation, memory buffer allocation, scheduling and code generation. You will be joining a team of exceptional people enthusiastic about developing state-of-the-art ML compiler technology. This internship position is for 3 months.

In this role you will design, implement and evaluate a method for managing floating point data types in the compiler. You will work under the guidance of two members of the compiler backend team; one is an experienced compiler developer based on the West Coast of the US. You will engage and collaborate with the engineering team in the US to understand the mechanisms made available by the hardware design to perform efficient floating point operations using reduced precision floating point data types. Successful completion of the project will be demonstrated by a simple model, output by the compiler and incorporating your code, that executes correctly on the hardware instruction set architecture (ISA) simulator. This model incorporates various number format representations for reduced precision floating point.
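As a rough illustration of the kind of reduced precision handling the project involves, the sketch below converts IEEE-754 float32 values to a 16-bit bfloat16-style representation and back using bit-level truncation with round-to-nearest-even. The bfloat16 format, the struct and the helper names are assumptions made for illustration only; this posting does not describe the actual number formats or compiler interfaces used by d-Matrix hardware.

    #include <cstdint>
    #include <cstring>
    #include <iostream>

    // Illustrative bfloat16: the top 16 bits of an IEEE-754 binary32 value
    // (1 sign bit, 8 exponent bits, 7 mantissa bits). This is a common
    // reduced-precision format chosen here as an assumption, not one
    // confirmed by the posting.
    struct BFloat16 {
        uint16_t bits = 0;
    };

    // float32 -> bfloat16 with round-to-nearest-even on the discarded
    // 16 mantissa bits. NaN inputs are not treated specially in this sketch.
    inline BFloat16 toBFloat16(float f) {
        uint32_t u;
        std::memcpy(&u, &f, sizeof(u));          // bit-level view of the float
        uint32_t lsb = (u >> 16) & 1u;           // last bit that will be kept
        uint32_t rounding_bias = 0x7FFFu + lsb;  // round-to-nearest-even bias
        u += rounding_bias;
        return BFloat16{static_cast<uint16_t>(u >> 16)};
    }

    // bfloat16 -> float32 by zero-extending the mantissa.
    inline float toFloat32(BFloat16 b) {
        uint32_t u = static_cast<uint32_t>(b.bits) << 16;
        float f;
        std::memcpy(&f, &u, sizeof(f));
        return f;
    }

    int main() {
        float values[] = {1.0f, 3.14159265f, -0.0007f, 65504.0f};
        for (float v : values) {
            std::cout << v << " -> " << toFloat32(toBFloat16(v)) << '\n';
        }
        return 0;
    }

Because bfloat16 keeps the float32 exponent width and simply drops the low mantissa bits, the conversion reduces to shifts and a rounding bias rather than a full re-encoding; narrower formats with different exponent widths would need more work.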

Qualifications

Minimum:
  • Bachelor's degree in computer science, or the equivalent of 3 years towards an engineering degree with an emphasis on computing and mathematics coursework.
  • Proficiency with C++ object-oriented programming is essential.
  • Understanding of fixed-point and floating-point number representations, floating-point arithmetic, reduced precision floating-point representations, sparse matrix storage representations, and the methods used to convert between them.
  • Some experience in applied computer programming (e.g. prior internship).
  • Understanding of basic compiler concepts and methods used in creating compilers (ideally via a compiler course).
  • Data structures and algorithms for manipulating directed acyclic graphs (a minimal scheduling sketch follows the qualifications).
Desired:
  • Familiarity with sparse matrix storage representations.
  • Hands-on experience with CNN, RNN and Transformer neural network architectures.
  • Experience with programming GPUs and specialized HW accelerator systems for deep neural networks.
  • Passionate about learning new compiler development methodologies like MLIR.
  • Enthusiasm for learning new concepts from compiler experts in the US and a willingness to work across time zones to facilitate collaboration.
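The minimum qualifications above ask for algorithms over directed acyclic graphs, and the role description mentions a scheduling pass over the IR. The sketch below applies Kahn's algorithm, a standard topological-sort technique, to order the operations of a tiny hypothetical op graph; the node names and graph structure are invented for illustration and do not reflect d-Matrix's IR or scheduler.

    #include <iostream>
    #include <map>
    #include <queue>
    #include <string>
    #include <vector>

    // Minimal scheduling-style pass: topologically order the ops of a small
    // directed acyclic graph with Kahn's algorithm. Node names are hypothetical
    // stand-ins for IR operations.
    int main() {
        // op -> ops that consume its result (edges op -> user)
        std::map<std::string, std::vector<std::string>> users = {
            {"load_a", {"matmul"}},
            {"load_b", {"matmul"}},
            {"matmul", {"relu"}},
            {"relu",   {"store"}},
            {"store",  {}},
        };

        // Count unscheduled producers (in-degree) for each op.
        std::map<std::string, int> indegree;
        for (const auto& [op, outs] : users) indegree[op];  // ensure key exists
        for (const auto& [op, outs] : users)
            for (const auto& user : outs) ++indegree[user];

        // Ops with no pending producers are ready to schedule.
        std::queue<std::string> ready;
        for (const auto& [op, deg] : indegree)
            if (deg == 0) ready.push(op);

        // Emit ops in dependency order, releasing users as producers finish.
        std::vector<std::string> schedule;
        while (!ready.empty()) {
            std::string op = ready.front();
            ready.pop();
            schedule.push_back(op);
            for (const auto& user : users[op])
                if (--indegree[user] == 0) ready.push(user);
        }

        for (const auto& op : schedule) std::cout << op << '\n';
        return 0;
    }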
APPLY NOW
