Microforge Agentic Sourcer

Company Name: Neuralace (Sabi)
Locations: USA, Europe, China, India
Seniority: All levels; no professors.

Neuralace is looking for people with experience in post-training. Any researcher who has done top-tier work in any of the research directions below is of interest. Our post-training of Qwen will go in two broad directions: consumer and knowledge work.

**Consumer post-training:** Most of the work here will be on designing the personality of the LLM and working with non-verifiable rewards.
• Better conversations → understanding of intents such as emotions and the implicit meaning of what is being said (e.g. hesitation, sarcasm, and humor); personality adherence.
• Better creativity → roleplay, creative writing.
• Better understanding of topical things and the Western zeitgeist: memes, TV shows, movies, etc.

**Knowledge post-training:** Standard RLVR on tool and computer use. PDF, PPT, and doc wrangling; code gen; as much agentic work as possible at this size.

**Tool call:** The highest leverage will come from tool-call skills. The model is meant to be a workhorse that punches above its weight for its parameter size; it does not need frontier intelligence. Can we teach the model to figure out when to call a bigger model's API as a tool, letting our model act as the front end with occasional help from a more intelligent model? The core insight is that most knowledge-work use cases can be solved by a well-trained 30B-grade model and do not need frontier models.

**Evals and reward models:** Evals for both VR use cases in knowledge work and non-VR use cases.

**Self-play data gen:** We need to generate conversation data with broad user personas. Efficient inference plus rollout ranking and filtering will be needed to create large SFT datasets.
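The tool-call escalation idea above (a small workhorse model serving as the front end and calling a bigger model only when needed) can be sketched as a simple router. Everything below is a hypothetical illustration: the confidence heuristic, the threshold, and both model stubs are placeholders of our own, not Neuralace's design or a real API.

```python
# Toy sketch of "escalate to a bigger model as a tool call".
# call_small_model / call_big_model are stand-ins, not real APIs.

def call_small_model(prompt: str) -> tuple[str, float]:
    """Stand-in for the local 30B-grade model: returns an answer plus a
    self-reported confidence in [0, 1]."""
    if "prove" in prompt or "derive" in prompt:
        return ("I'm not sure.", 0.2)  # hard request: low confidence
    return (f"Answer to: {prompt}", 0.9)

def call_big_model(prompt: str) -> str:
    """Stand-in for a frontier-model API exposed to the small model as a tool."""
    return f"[frontier] Answer to: {prompt}"

def route(prompt: str, threshold: float = 0.5) -> str:
    """Serve from the small model unless its confidence falls below the
    threshold, in which case escalate to the bigger model."""
    answer, confidence = call_small_model(prompt)
    if confidence >= threshold:
        return answer
    return call_big_model(prompt)
```

In a real system the escalation decision would itself be trained (for example with RLVR rewarding cheap-but-correct trajectories), rather than a hand-written threshold.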

completed
370 qualified · 1 run · May 8, 12:16 PM

About Axiom

Axiom is building the translational intelligence layer for drug discovery: AI systems that help scientists predict human toxicity earlier, more accurately, and more mechanistically than animal studies or legacy in vitro assays. Unexpected toxicity remains one of the largest reasons drug programs fail. Today, drug discovery teams still rely on animal studies, low-dimensional assays, and fragmented expert judgment to decide which molecules are safe enough to advance. We believe this can be dramatically improved.

To predict toxicity, we need to understand what molecules actually do inside human cells and tissues. Mass spectrometry is one of the most important tools for that mission. It lets us observe the biochemical state of cells, identify metabolic liabilities, detect lipid and metabolite changes, understand pathway disruption, and eventually connect chemical structure to human-relevant mechanisms of toxicity.

We are looking for a computational scientist with deep mass spectrometry expertise to help build this foundation. You will develop and scale computational workflows for LC-MS/MS data, extract biological signal from complex biochemical datasets, and help turn mass spec into a core modality for Axiom’s AI toxicity prediction platform.

Charter

Be a founding member of the team building the first accurate AI systems for drug toxicity prediction: systems that can help replace animal studies and legacy lab experiments with human-relevant models.

What you will do

You will own major parts of Axiom’s computational mass spectrometry stack. You will:
• Analyze large-scale biological mass spectrometry datasets, primarily LC-MS/MS, across metabolomics, lipidomics, proteomics, and reactive metabolite workflows.
• Build, improve, and scale computational pipelines for untargeted LC-MS/MS analysis using tools such as MZmine, OpenMS, MS-DIAL, GNPS, Skyline, or custom internal software.
• Develop workflows for peak detection, alignment, normalization, annotation, batch correction, QC, feature filtering, compound identification, and downstream biological interpretation.
• Turn raw mass spec data into model-ready representations that can be used by machine learning systems and mechanistic reasoning agents.
• Work with biology, chemistry, ML, engineering, and lab teams to design, debug, and improve high-throughput LC-MS/MS assays.
• Extract actionable biological insights from mass spec data, including pathway-level changes, metabolic signatures, lipid remodeling, protein abundance changes, and evidence for specific toxicity mechanisms.
• Help build datasets that connect chemical structure, dose, exposure, cellular phenotype, biochemical state, and human toxicity outcomes.
• Develop quality control systems for high-throughput mass spectrometry datasets, including instrument performance, sample quality, replicate concordance, batch effects, missingness, drift, and annotation confidence.
• Collaborate with ML researchers to build models that use mass spec features to improve toxicity prediction.
• Investigate where mass spec helps explain model errors, reveals missing biology, or identifies mechanisms not visible from imaging, transcriptomics, or standard biochemical assays.
• Design new strategies for expanding Axiom’s mass spec data generation based on model performance, biological coverage, and customer needs.
• Help make mass spectrometry data interpretable and useful to drug hunters, toxicologists, and Axiom’s internal AI agents.

What we are looking for

We are looking for someone who can combine mass spectrometry expertise, computational depth, and biological judgment. You might be a great fit if:
• You have built computational workflows for untargeted LC-MS/MS metabolomics.
• You have used mass spectrometry data to answer real biological questions, not just run pipelines.
• You understand the messy reality of mass spec data: missingness, batch effects, adducts, isotopes, retention time drift, annotation uncertainty, instrument artifacts, and biological confounders.
• You are comfortable moving from raw files to biological interpretation. You can reason about metabolism, pathway disruption, lipid biology, protein changes, and drug-induced cellular stress.
• You are excited by the idea of using mass spec data as training data for AI systems.
• You want to build scalable infrastructure, not just analyze one-off datasets.
• You care deeply about data quality, reproducibility, and scientific rigor.
• You can work closely with wet lab scientists to improve experimental design and debug assays.
• You want ownership over a critical scientific modality at an early company.
• You are motivated by the mission of replacing animal testing and preventing clinical toxicity failures.

Technical skills we value

We do not expect every candidate to have all of these, but we are excited by experience with:
• Python, Pandas, NumPy, SciPy, scikit-learn, Jupyter notebooks
• MZmine, OpenMS, MS-DIAL, XCMS, GNPS, Skyline, ProteoWizard, MaxQuant, DIA-NN, Spectronaut, or related tools
• LC-MS/MS data formats such as mzML, mzXML, RAW, mzTab, mzIdentML, mzQuantML, or vendor-specific formats
• Peak picking, chromatographic alignment, feature grouping, deconvolution, annotation, normalization, and batch correction
• Metabolite, lipid, and peptide identification workflows
• Spectral libraries, molecular networking, fragmentation interpretation, adduct/isotope handling, and confidence scoring
• Statistical modeling, dimensionality reduction, clustering, differential abundance analysis, and pathway enrichment
• Large-scale data processing, SQL, cloud computing, workflow orchestration, and reproducible analysis pipelines
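As a deliberately simplified illustration of two of the QC steps named above (missingness filtering and a simple total-intensity normalization of an untargeted feature table), here is a sketch in pandas. The function name, the 50% missingness threshold, and the synthetic table are assumptions of ours, not Axiom's actual pipeline.

```python
import numpy as np
import pandas as pd

def qc_feature_table(table: pd.DataFrame, max_missing_frac: float = 0.5) -> pd.DataFrame:
    """Toy QC for an LC-MS/MS feature table (rows = features, cols = samples):
    drop features missing in more than max_missing_frac of samples, then
    rescale each sample so all column totals match the median total."""
    keep = table.isna().mean(axis=1) <= max_missing_frac
    filtered = table.loc[keep]
    totals = filtered.sum(axis=0)            # per-sample total intensity
    return filtered * (totals.median() / totals)  # broadcasts over columns

# Tiny synthetic table: 3 features x 2 samples, one feature missing everywhere.
demo = pd.DataFrame(
    {"s1": [100.0, np.nan, 300.0], "s2": [220.0, np.nan, 660.0]},
    index=["f1", "f2", "f3"],
)
out = qc_feature_table(demo)  # f2 dropped; both columns now sum to 640.0
```

Real workflows would add batch correction, drift correction against QC pool injections, and annotation-confidence filtering on top of this.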

completed
165 qualified · 1 run · May 7, 10:46 PM

Spectral is building generative foundation models that turn user input into structured, editable 3D CAD. We shipped SGS-1, the first generative model for structured CAD. We’re now building SGS-2. This role is at the center of that work.

We’re looking for someone who can push our models forward: designing architectures, developing training methodologies, building novel representations for 3D geometry, and running experiments that move the state of the art. You’ll work directly on the models themselves: not just the data that feeds them or the infra that serves them, but the core research and engineering that determines what our models can actually do.

This is a role where the line between research and engineering is blurry by design. You’ll propose and test ideas, write the code to run them, analyze results, and ship what works into production. We’re a small team, so you’ll have real ownership over the direction of the research: not just execution on someone else’s roadmap. Whether you come from a PhD research track, an applied ML engineering background, or somewhere in between, what matters is that you can do the work.

Responsibilities
• Design, implement, and iterate on model architectures for generative 3D CAD. This includes working with novel representations of geometry, topology, and parametric structure that don’t have off-the-shelf solutions.
• Develop and improve training methodologies, including supervised, self-supervised, and reinforcement learning, to push model quality, consistency, and generalization.
• Design and run experiments rigorously. You’ll formulate hypotheses, build the infrastructure to test them, interpret results, and decide what to pursue further.
• Collaborate closely with the geometry engineering team on data representation. The data and the model evolve together, and you’ll be in that conversation from both sides.
• Develop evaluation frameworks and benchmarks that meaningfully measure model quality against real-world CAD standards, not just standard ML metrics.
• Stay current with relevant research (3D generation, geometric deep learning, diffusion models, autoregressive models, RL for generative models) and bring ideas back to the team.
• Contribute to the team’s broader research direction. At our size, everyone has a voice in what we work on and why.
• Write clean, production-quality code. Research code that works once isn’t enough; it needs to be reproducible, readable, and eventually shippable.

Qualifications

Required
• Strong track record in machine learning research or applied ML engineering, demonstrated through publications, shipped models, or meaningful contributions to open-source ML projects.
• Deep understanding of modern generative model architectures: transformers, diffusion models, autoregressive models, or similar. You should be able to read a new paper and implement it, or identify why it won’t work for your problem.
• Strong proficiency in Python and PyTorch (or equivalent). You write research code that other people can actually read and build on.
• Experience designing and running experiments at scale: distributed training, hyperparameter tuning, ablation studies, and the discipline to interpret results honestly.
• Solid mathematical foundations: linear algebra, probability, optimization, and enough geometry to reason about 3D representations.
• Experience solving problems that stump others, and a love of the process of solving hard problems.

Big Pluses
• Experience with 3D data, geometric deep learning, or spatial representations (point clouds, meshes, B-rep, NURBS, voxels, implicit surfaces, or similar). If you’ve trained a model on 3D data, you know how different it is from images or text.
• Experience with reinforcement learning for generative models (RLHF, DPO, GRPO, or similar).
• Experience with CAD, computational geometry, or engineering design tools. You understand what a feature tree is, why STEP files are painful, or what makes a B-rep valid.
• PhD in Computer Science, Machine Learning, Robotics, Computational Geometry, or a related field. (A strong publication record or equivalent industry experience matters more than the degree itself.)
• Experience with generative models for structured or sequential outputs (code generation, molecular design, procedural generation, or more generally, domains where the output has hard constraints and isn’t just “looks good”).
• Background in embodied AI, self-driving, simulation, robotics, or other domains requiring spatial reasoning at scale.

I want people who specifically have experience in generative CAD design.
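To make "novel representations of geometry, topology, and parametric structure" concrete, here is a toy sketch of one common framing: flattening a parametric feature history into a discrete token sequence that an autoregressive model can predict. This is entirely our own invention for illustration, not SGS-1/SGS-2's actual representation.

```python
# Toy illustration: a CAD feature tree as a token sequence.
from dataclasses import dataclass

@dataclass
class Feature:
    op: str        # operation name, e.g. "sketch", "extrude", "fillet"
    params: tuple  # quantized parameters for the operation

def tokenize(history: list) -> list:
    """Flatten a feature history into tokens: one op token, then one token
    per quantized parameter, then an end-of-feature token."""
    tokens = []
    for f in history:
        tokens.append(f"<{f.op}>")
        tokens.extend(str(p) for p in f.params)
        tokens.append("<end>")
    return tokens

# A 40x40 sketch extruded by 10 units, as a model-ready sequence.
part = [Feature("sketch", (0, 0, 40, 40)), Feature("extrude", (10,))]
seq = tokenize(part)
```

Unlike free-form text, such sequences carry hard constraints (a valid sketch must precede an extrude, parameters must yield a valid B-rep), which is exactly why CAD-aware evaluation beats standard ML metrics here.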

completed
174 qualified · 1 run · May 6, 8:05 PM

(Same Spectral job description as above.)

completed
38 qualified · 1 run · May 5, 9:04 AM

(Same Spectral job description as above.)

completed
33 qualified · 1 run · May 4, 9:55 PM

(Same Spectral job description as above.)

completed
42 qualified · 2 runs · May 5, 9:01 AM

(Same Spectral job description as above, with these additional Big Pluses and a different sourcing note:)
• High GPA in a competitive undergrad program.
• Experience working with cloud infrastructure and distributed training pipelines (but if you’re strong on the research side, this can be picked up quickly).
• Objectively impressive achievements in any domain. Built something unusual? Won something hard? We want to hear about it.

Find people who have extensively done work on CAD generation. I only want them to be located in the US.

completed
0 qualified · 1 run · May 1, 7:46 AM

Spectral is building generative foundation models that turn user input into structured, editable 3D CAD. We shipped SGS-1, the first generative model for structured CAD. We’re now building SGS-2. This role is at the center of that work. We’re looking for someone who can push our models forward: designing architectures, developing training methodologies, building novel representations for 3D geometry, and running experiments that move the state of the art. You’ll work directly on the models themselves, not just the data that feeds them or the infra that serves them, but the core research and engineering that determines what our models can actually do. This is a role where the line between research and engineering is blurry by design. You’ll propose and test ideas, write the code to run them, analyze results, and ship what works into production. We’re a small team, so you’ll have real ownership over the direction of the research: not just execution on someone else’s roadmap. Whether you come from a PhD research track, an applied ML engineering background, or somewhere in between, what matters is that you can do the work. Responsibilities • Design, implement, and iterate on model architectures for generative 3D CAD. This includes working with novel representations of geometry, topology, and parametric structure that don’t have off-the-shelf solutions. • Develop and improve training methodologies, including supervised, self-supervised, and reinforcement learning, to push model quality, consistency, and generalization. • Design and run experiments rigorously. You’ll formulate hypotheses, build the infrastructure to test them, interpret results, and decide what to pursue further. • Collaborate closely with the geometry engineering team on data representation. The data and the model evolve together, and you’ll be in that conversation from both sides. 
• Develop evaluation frameworks and benchmarks that meaningfully measure model quality against real-world CAD standards, not just standard ML metrics.
• Stay current with relevant research (3D generation, geometric deep learning, diffusion models, autoregressive models, RL for generative models) and bring ideas back to the team.
• Contribute to the team’s broader research direction. At our size, everyone has a voice in what we work on and why.
• Write clean, production-quality code. Research code that works once isn’t enough; it needs to be reproducible, readable, and eventually shippable.

Qualifications

Required
• Strong track record in machine learning research or applied ML engineering, demonstrated through publications, shipped models, or meaningful contributions to open-source ML projects.
• Deep understanding of modern generative model architectures: transformers, diffusion models, autoregressive models, or similar. You should be able to read a new paper and implement it, or identify why it won’t work for your problem.
• Strong proficiency in Python and PyTorch (or equivalent). You write research code that other people can actually read and build on.
• Experience designing and running experiments at scale: distributed training, hyperparameter tuning, ablation studies, and the discipline to interpret results honestly.
• Solid mathematical foundations: linear algebra, probability, optimization, and enough geometry to reason about 3D representations.
• Experience solving problems that stump others, and a love for the process of solving hard problems.

Big Pluses
• Experience with 3D data, geometric deep learning, or spatial representations (point clouds, meshes, B-rep, NURBS, voxels, implicit surfaces, or similar). If you’ve trained a model on 3D data, you know how different it is from images or text.
• Experience with reinforcement learning for generative models (RLHF, DPO, GRPO, or similar).
• Experience with CAD, computational geometry, or engineering design tools. You understand what a feature tree is, why STEP files are painful, or what makes a B-rep valid.
• PhD in Computer Science, Machine Learning, Robotics, Computational Geometry, or a related field. (A strong publication record or equivalent industry experience matters more than the degree itself.)
• Experience with generative models for structured or sequential outputs (code generation, molecular design, procedural generation, or, more generally, domains where the output has hard constraints and isn’t just “looks good”).
• Background in embodied AI, self-driving, simulation, robotics, or other domains requiring spatial reasoning at scale.
• High GPA in a competitive undergrad program.
• Experience working with cloud infrastructure and distributed training pipelines (but if you’re strong on the research side, this can be picked up quickly).
• Objectively impressive achievements in any domain. Built something unusual? Won something hard? We want to hear about it.

completed
37 qualified | 1 run | Apr 30, 6:15 AM

# Research Scientist - Synthetic Data

## **What we're looking for**

Spectral is seeking a talented researcher to help us build the synthetic data pipelines that will power our next generation of CAD foundation models.

## **Responsibilities**

- Design and implement synthetic data pipelines to power our next models
- Work with the data team to ensure that synthetic data methods are adequately bootstrapped
- Design and run experiments in an iterative, scientific way
- Collaborate with a small, elite team of researchers and engineers across domains

## **Qualifications**

- 3+ years of experience in a relevant engineering field. A Bachelor's, Master's, or PhD in Computer Science, Robotics, Engineering, Math, or a related technical field is a plus.
- Solid coding background: able to write code, run experiments, and analyze results end to end
- Strong intuitive grasp of ML concepts and theory, demonstrated through your background
- High GPA in a competitive undergrad program is a big plus
- Experience with generative CAD modeling or other 3D domains, embodied AI, or image/video/world modeling
- Objectively impressive achievements in any domain are a big plus

## **Benefits**

- Health insurance with 100% of the premium covered
- Free lunch, dinner, and snacks

## **How to Apply**

Does this position sound like a good fit? Email us at [jobs@spectrallabs.ai](mailto:jobs@spectrallabs.ai). Include this role's title in your subject line (it'll help us sort through the emails). Send along links that best showcase the relevant things you've built and done.

completed
0 qualified | 1 run | Apr 23, 7:14 PM

TPU Kernel Engineer
San Francisco, CA | New York City, NY | Seattle, WA

About Anthropic
Anthropic’s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.

About the Role
As a TPU Kernel Engineer, you'll be responsible for identifying and addressing performance issues across many different ML systems, including research, training, and inference. A significant portion of this work will involve designing and optimizing kernels for the TPU. You will also provide feedback to researchers about how model changes impact performance. Strong candidates will have a track record of solving large-scale systems problems and low-level optimization.

You may be a good fit if you:
• Have significant experience optimizing ML systems for TPUs, GPUs, or other accelerators
• Are results-oriented, with a bias towards flexibility and impact
• Pick up slack, even if it goes outside your job description
• Enjoy pair programming (we love to pair!)
• Want to learn more about machine learning research
• Care about the societal impacts of your work

Strong candidates may also have experience with:
• High-performance, large-scale ML systems
• Designing and implementing kernels for TPUs or other ML accelerators
• Understanding accelerators at a deep level, e.g. a background in computer architecture
• ML framework internals
• Language modeling with transformers

Representative projects:
• Implement low-latency, high-throughput sampling for large language models
• Adapt existing models for low-precision inference
• Build quantitative models of system performance
• Design and implement custom collective communication algorithms
• Debug kernel performance at the assembly level

The annual compensation range for this role is listed below. For sales roles, the range provided is the role’s On Target Earnings ("OTE") range, meaning that the range includes both the sales commissions/sales bonuses target and the annual base salary for the role.

Annual Salary: $280,000 - $850,000 USD

Logistics
• Minimum education: Bachelor’s degree or an equivalent combination of education, training, and/or experience
• Required field of study: A field relevant to the role as demonstrated through coursework, training, or professional experience
• Minimum years of experience: Years of experience required will correlate with the internal job level requirements for the position
• Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.
• Visa sponsorship: We do sponsor visas! However, we aren't able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.

We encourage you to apply even if you do not believe you meet every single qualification. Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you're interested in this work. We think AI systems like the ones we're building have enormous social and ethical implications. We think this makes representation even more important, and we strive to include a range of diverse perspectives on our team.

Your safety matters to us. To protect yourself from potential scams, remember that Anthropic recruiters only contact you from @anthropic.com email addresses. In some cases, we may partner with vetted recruiting agencies who will identify themselves as working on behalf of Anthropic. Be cautious of emails from other domains. Legitimate Anthropic recruiters will never ask for money, fees, or banking information before your first day. If you're ever unsure about a communication, don't click any links; visit anthropic.com/careers directly for confirmed position openings.

How we're different
We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. We value impact, advancing our long-term goals of steerable, trustworthy AI, rather than work on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We're an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at any given time. As such, we greatly value communication skills. The easiest way to understand our research directions is to read our recent research, which continues many of the directions our team worked on prior to Anthropic, including: GPT-3, Circuit-Based Interpretability, Multimodal Neurons, Scaling Laws, AI & Compute, Concrete Problems in AI Safety, and Learning from Human Preferences.

Come work with us! Anthropic is a public benefit corporation headquartered in San Francisco. We offer competitive compensation and benefits, optional equity donation matching, generous vacation and parental leave, flexible working hours, and a lovely office space in which to collaborate with colleagues.

Guidance on Candidates' AI Usage: Learn about our policy for using AI in our application process

cancelled
0 qualified | 1 run | Apr 22, 2:03 PM


cancelled
0 qualified | 1 run | Apr 22, 2:03 PM


cancelled
0 qualified | 1 run | Apr 22, 2:03 PM


cancelled
0 qualified | 1 run | Apr 22, 2:02 PM


cancelled
0 qualified · 1 run · Apr 22, 1:53 PM

TPU Kernel Engineer
San Francisco, CA | New York City, NY | Seattle, WA

About Anthropic
Anthropic’s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.

About the Role
As a TPU Kernel Engineer, you'll be responsible for identifying and addressing performance issues across many different ML systems, including research, training, and inference. A significant portion of this work will involve designing and optimizing kernels for the TPU. You will also provide feedback to researchers about how model changes impact performance. Strong candidates will have a track record of solving large-scale systems problems and performing low-level optimization.

You may be a good fit if you:
- Have significant experience optimizing ML systems for TPUs, GPUs, or other accelerators
- Are results-oriented, with a bias towards flexibility and impact
- Pick up slack, even if it goes outside your job description
- Enjoy pair programming (we love to pair!)
- Want to learn more about machine learning research
- Care about the societal impacts of your work

Strong candidates may also have experience with:
- High-performance, large-scale ML systems
- Designing and implementing kernels for TPUs or other ML accelerators
- Understanding accelerators at a deep level, e.g. a background in computer architecture
- ML framework internals
- Language modeling with transformers

Representative projects:
- Implement low-latency, high-throughput sampling for large language models
- Adapt existing models for low-precision inference
- Build quantitative models of system performance
- Design and implement custom collective communication algorithms
- Debug kernel performance at the assembly level

The annual compensation range for this role is listed below. For sales roles, the range provided is the role’s On Target Earnings ("OTE") range, meaning that the range includes both the sales commissions/sales bonuses target and annual base salary for the role.

Annual Salary: $280,000 - $850,000 USD

Logistics
- Minimum education: Bachelor’s degree or an equivalent combination of education, training, and/or experience
- Required field of study: A field relevant to the role as demonstrated through coursework, training, or professional experience
- Minimum years of experience: Years of experience required will correlate with the internal job level requirements for the position
- Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.
- Visa sponsorship: We do sponsor visas! However, we aren't able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.

We encourage you to apply even if you do not believe you meet every single qualification. Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you're interested in this work. We think AI systems like the ones we're building have enormous social and ethical implications. We think this makes representation even more important, and we strive to include a range of diverse perspectives on our team.

Your safety matters to us. To protect yourself from potential scams, remember that Anthropic recruiters only contact you from @anthropic.com email addresses. In some cases, we may partner with vetted recruiting agencies who will identify themselves as working on behalf of Anthropic. Be cautious of emails from other domains. Legitimate Anthropic recruiters will never ask for money, fees, or banking information before your first day. If you're ever unsure about a communication, don't click any links—visit anthropic.com/careers directly for confirmed position openings.

How we're different
We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact — advancing our long-term goals of steerable, trustworthy AI — rather than work on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science.

We're an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at any given time. As such, we greatly value communication skills. The easiest way to understand our research directions is to read our recent research. This research continues many of the directions our team worked on prior to Anthropic, including: GPT-3, Circuit-Based Interpretability, Multimodal Neurons, Scaling Laws, AI & Compute, Concrete Problems in AI Safety, and Learning from Human Preferences.

Come work with us!
Anthropic is a public benefit corporation headquartered in San Francisco. We offer competitive compensation and benefits, optional equity donation matching, generous vacation and parental leave, flexible working hours, and a lovely office space in which to collaborate with colleagues.

Guidance on Candidates' AI Usage: Learn about our policy for using AI in our application process
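One of the representative projects above, building quantitative models of system performance, is often started with a simple roofline estimate: a kernel's lower-bound runtime is set by whichever of compute throughput or memory bandwidth it saturates. The sketch below is illustrative only; the peak-FLOP and bandwidth figures are hypothetical placeholders, not the specs of any particular accelerator.

```python
# Minimal roofline-style performance model: a kernel is compute-bound
# or memory-bandwidth-bound depending on its arithmetic intensity
# (FLOPs per byte moved between HBM and on-chip memory).

def roofline_time(flops: float, bytes_moved: float,
                  peak_flops: float, peak_bw: float) -> float:
    """Lower-bound execution time in seconds."""
    compute_time = flops / peak_flops
    memory_time = bytes_moved / peak_bw
    return max(compute_time, memory_time)

# Hypothetical accelerator: 300 TFLOP/s peak, 1.2 TB/s HBM bandwidth.
PEAK_FLOPS = 300e12
PEAK_BW = 1.2e12

# A bf16 matmul C[m,n] = A[m,k] @ B[k,n]
m, k, n = 4096, 4096, 4096
flops = 2 * m * k * n                      # multiply-accumulate count
bytes_moved = 2 * (m * k + k * n + m * n)  # 2 bytes per bf16 element

t = roofline_time(flops, bytes_moved, PEAK_FLOPS, PEAK_BW)
intensity = flops / bytes_moved
print(f"arithmetic intensity: {intensity:.0f} FLOP/B, time >= {t*1e6:.0f} us")
```

At this size the matmul is compute-bound; shrinking `m` pushes intensity down until the memory term dominates, which is exactly the regime low-latency sampling kernels live in.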

cancelled
0 qualified · 1 run · Apr 22, 1:52 PM


cancelled
0 qualified · 1 run · Apr 22, 1:51 PM


cancelled
0 qualified · 1 run · Apr 22, 1:46 PM

The Role
You will lead the design, execution, and analysis of rheological experiments to build a practical, decision-oriented understanding of how our polymer materials behave in real processing environments. This is not academic exploration; the role is focused on answering how and why our materials perform the way they do, and on using that understanding to guide formulation, processing, and product decisions.

This role sits within a collaborative materials development team and reports to our Associate Director of Product Development. While highly autonomous in day-to-day work, you will partner closely with chemists, process engineers, and materials scientists to connect formulation, processing, and performance. Success in this role means not just generating high-quality data, but using it to define process windows, guide formulation decisions, and ensure our materials perform reliably in real-world applications.

What You’ll Do
- Design and execute rheological experiments to characterize polymer systems under a variety of extrusion-based methods
- Apply standard rheological techniques while developing and adapting methods for new materials and equipment
- Establish processing → structure → property relationships that inform product performance
- Define operating windows and process boundaries to ensure materials meet target specifications
- Partner with chemists to answer key formulation questions (e.g., catalyst loading, mixing behavior, residence time effects)
- Build structured, reproducible experimental workflows with a strong emphasis on rigor and data quality
- Synthesize and communicate results clearly to enable fast, confident decision-making across the team

Success in This Role (First 6 Months)
- Establish rheological guidelines for a core additive manufacturing product
- Define process windows and constraints required to achieve consistent performance
- Provide clear recommendations on formulation and processing variables (e.g., catalyst incorporation, mixing conditions)
- Build a foundation of reliable, interpretable data that informs both current products and development of future products

Who You Are
- Strong foundation in rheology applied to polymer or soft-material systems
- Hands-on experimentalist who can design as well as execute studies
- Comfortable operating with autonomy and bringing structure to ambiguous problems
- Product-oriented thinker who connects material behavior to real-world performance
- Clear and concise communicator who can translate complex data into actionable insights
- Pragmatic and outcome-driven: you prioritize solving the right problems over exploring everything

Qualifications
Required:
- MS or PhD in Polymer Science, Materials Science, Chemical Engineering, or a related field
- Meaningful experience with thermomechanical analysis methods, e.g. DMA, DSC, TGA, TMA
- Hands-on experience with rheological characterization of polymeric materials (e.g., melts, resins, gels, or similar systems)
- Experience designing experiments and interpreting results in a processing or application context

Preferred:
- Industry experience in additive manufacturing, polymers, coatings, or advanced materials
- Experience working with or supporting filament-based AM processes
- Experience working with single- or twin-screw extrusion-based processes
- Experience developing or adapting rheological methods for new materials or equipment
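Rheological characterization of the kind described above often comes down to fitting a constitutive model to a measured flow curve. As an illustrative sketch (the data points and model choice here are hypothetical, not from this posting), a power-law (Ostwald–de Waele) model becomes a straight-line fit in log–log space:

```python
import numpy as np

# Hypothetical flow-curve data for a shear-thinning polymer melt:
# shear rate (1/s) vs apparent viscosity (Pa·s).
shear_rate = np.array([0.1, 1.0, 10.0, 100.0, 1000.0])
viscosity = np.array([5000.0, 2000.0, 800.0, 320.0, 130.0])

# Power-law model: eta = K * gamma_dot**(n - 1)
# => log(eta) = log(K) + (n - 1) * log(gamma_dot), a straight line.
slope, intercept = np.polyfit(np.log(shear_rate), np.log(viscosity), 1)
n = slope + 1          # flow behavior index (n < 1 => shear-thinning)
K = np.exp(intercept)  # consistency index (Pa·s^n)

print(f"n = {n:.2f}, K = {K:.0f} Pa·s^n")
```

The fitted index `n` and consistency `K` feed directly into defining process windows, e.g. predicting apparent viscosity at the wall shear rates typical of a given extrusion condition.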

completed
53 qualified · 1 run · Apr 19, 3:14 AM

About Etched
Etched is building the world’s first AI inference system purpose-built for transformers - delivering over 10x higher performance and dramatically lower cost and latency than a B200. With Etched ASICs, you can build products that would be impossible with GPUs, like real-time video generation models and extremely deep & parallel chain-of-thought reasoning agents. Backed by hundreds of millions from top-tier investors and staffed by leading engineers, Etched is redefining the infrastructure layer for the fastest growing industry in history.

Job Summary
Etched is looking for exceptional Physical Design engineers to join our team. In this role, you will own block-level implementation and verification, drive timing closure and PPA optimization, supervise 3rd-party design work, and help improve our design flows and iteration speed.

Key Responsibilities
- Deeply understand what is involved in physical design
- Run PD flows to close blocks, support ASIC infrastructure, automate PD flows, and improve CAD infrastructure
- Collaborate with RTL designers and provide physical design feedback to improve PPA
- Drive dashboards that show the convergence of projects related to PD
- Optimize tool flows, working with EDA vendors to incorporate the latest features
- Be accountable for block-level closure
- Supervise the outsourcing of physical design to a 3rd-party service

You may be a good fit if you have
- 5-10+ years of previous experience with PD tools, flows, and design methodology from RTL synthesis to GDSII sign-off
- Experience with back-end design and timing closure on advanced process nodes (5nm and below)
- Experience with Cadence (Innovus, Genus) or Synopsys (ICC2, Fusion Compiler) automated RTL-to-GDSII flows
- Experience with sign-off tools (PrimeTime, Tempus, Voltus, etc.)
- Experience with UPF-based low-power design methodology, power verification, synthesis, scan insertion/ATPG, formal verification, floorplanning, placement, CTS, routing, IR drop, and EM/antenna analysis
- Deep creativity and the ability to think from first principles
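The timing-closure convergence dashboards mentioned above usually track two headline numbers per block: worst negative slack (WNS) and total negative slack (TNS). A minimal sketch, with hypothetical per-endpoint slack values standing in for an STA report:

```python
# Toy timing-closure summary: given per-endpoint slack values (ns),
# compute the figures a PD convergence dashboard typically tracks.

def timing_summary(slacks_ns):
    violations = [s for s in slacks_ns if s < 0]
    wns = min(slacks_ns)   # worst slack across all endpoints
    tns = sum(violations)  # total negative slack (0 if block is closed)
    return wns, tns, len(violations)

# Hypothetical endpoint slacks from one implementation iteration.
slacks = [0.12, -0.03, 0.45, -0.11, 0.00, -0.02]
wns, tns, nviol = timing_summary(slacks)
print(f"WNS={wns:.2f} ns, TNS={tns:.2f} ns, failing endpoints={nviol}")
```

Plotting WNS/TNS per iteration is what makes a block's convergence (or divergence) visible at a glance across ECO loops.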

failed
0 qualified · 1 run · Mar 24, 5:04 PM