
Nanotoolworks

🔬Contract professional services in nanoscale microscopy ... *no job too small*🔬

It's all about developing the skills necessary to work at the bottom, which means that Nanotoolworks primarily works in the virtual in order to understand and optimize work at the nanoscale. Optimizing work at the nanoscale requires a broader array of tools ... specifically, we need to re-learn how we learn, which means doing a better job of knowledge engineering, as the developments of the last decade have shown. Because of the slow-down of Moore's law and Dennard scaling, current compute technology can no longer double processing speed with every new silicon generation at constant power, so today's AI processors and accelerators need to make more efficient use of the available power ... ultimately, working at the bottom is about solving fundamental problems more elegantly. Consider, for example, the efforts of Intelligence Processing Units (IPUs) to explore architectures that achieve more efficient parallel implementations of general matrix multiplications (GEMMs). Of course, the current state of IPU architecture will not be the final answer, but GEMMs are such a fundamental task of AI that the example illustrates how the from-the-bottom approach must be the fundamental principle of Nanotoolworks.
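To see why GEMM rewards architectural attention, note that it decomposes into independent tiles that can be mapped onto parallel compute units sitting close to local memory. The NumPy sketch below is purely illustrative of that decomposition -- nothing IPU-specific:

```python
import numpy as np

def tiled_gemm(A, B, tile=64):
    """Block-decomposed C = A @ B. Each (i, j) output tile is an
    independent unit of work -- the property that GEMM accelerators
    exploit by mapping tiles onto parallel units with local memory."""
    m, k = A.shape
    k2, n = B.shape
    assert k == k2
    C = np.zeros((m, n), dtype=A.dtype)
    for i in range(0, m, tile):
        for j in range(0, n, tile):
            for p in range(0, k, tile):  # accumulate partial products
                C[i:i+tile, j:j+tile] += (
                    A[i:i+tile, p:p+tile] @ B[p:p+tile, j:j+tile]
                )
    return C

A = np.random.rand(256, 256)
B = np.random.rand(256, 256)
assert np.allclose(tiled_gemm(A, B), A @ B)
```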

To work more elegantly at the molecular level, we may need to think more, to do more in the virtual realm and to be more broadly aware of developments elsewhere ... to be better at reflexive, situationally-aware learning, as if our methods attempt to imitate AlphaZero by learning faster as we do a better job of deep learning from the large-scale modern competitive landscape.

Nanotoolworks is open source ... which means that everything we do here is about dogfooding the skills needed to accomplish realtime imaging, machine vision and measurement of properties at the nanoscale. The ultimate objective is more precise automated control of process variables; the immediate objective is more informed process-development intelligence, i.e. the ability to furnish a realtime picture of what is happening as atoms are being deposited or etched.
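To make that immediate objective concrete, here is a deliberately toy sketch of the shape of such a realtime monitoring-and-control loop; `grab_frame`, `estimate_thickness` and the setpoint/gain values are hypothetical stand-ins for real instrument interfaces, not any actual tool's API:

```python
import numpy as np

def grab_frame():
    """Hypothetical stand-in for a detector interface; a real system
    would pull frames from the microscope's acquisition API."""
    return np.random.rand(512, 512)

def estimate_thickness(frame):
    """Toy property estimate -- mean frame intensity as a proxy signal."""
    return float(frame.mean())

SETPOINT = 0.5  # hypothetical target value for the monitored property
GAIN = 0.1      # hypothetical proportional gain

for step in range(10):
    frame = grab_frame()
    measured = estimate_thickness(frame)
    correction = GAIN * (SETPOINT - measured)  # proportional control only
    print(f"step {step}: measured={measured:.3f}  correction={correction:+.5f}")
```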

Our objective is to generally work toward training and skills development in:

  • nanofabrication, particularly atomic layer chemistry, but also enabling technologies like high-NA EUV lithography;
  • hacking where optimal; coding where optimal -- this might mean Jupyter, Python and shell scripting, or it might mean coding in C++ and working with various kinds of compilers to optimize multi-instance GPU code;
  • familiarity with the wide array of competitive cloud computing options for a variety of computing tasks ... from Jupyter notebooks to HPC simulations ... Colab, GCP, Lambda, Paperspace, AWS, Azure and the various ways of accessing GPUs to optimize a compute budget (see the timing sketch after this list);
  • something approaching professional-level experience in K8s and Linux administration;
  • understanding of multi-instance GPUs, CUDA programming and math libraries; and
  • excellent written communication and oral presentation skills.
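A first-pass sketch of the kind of measurement that informs those budget decisions: time a single-precision GEMM on whatever device is available, using CuPy when a GPU is present and falling back to NumPy otherwise. A real benchmark would warm up the device and average over many runs; this is only the shape of the experiment:

```python
import time
import numpy as np

try:
    import cupy as cp  # GPU path, used only if CuPy and a GPU are present
    xp = cp
except ImportError:
    xp = np

# Single-precision GEMM, the workhorse kernel discussed above.
A = xp.random.rand(2048, 2048).astype(xp.float32)
B = xp.random.rand(2048, 2048).astype(xp.float32)

start = time.perf_counter()
C = A @ B
if xp is not np:
    cp.cuda.Stream.null.synchronize()  # GPU kernels launch asynchronously
elapsed = time.perf_counter() - start

print(f"{'GPU' if xp is not np else 'CPU'} GEMM: {elapsed * 1e3:.1f} ms")
```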

We find that classrooms do not really help us; it is necessary to learn things the HARD way ... to actually DO them, and to do so in as cost-efficient a manner as possible.

We encourage people to fork our repositories and develop their own writing / presentation ability as they curate, develop and maintain open-source repositories that pertain to the development of skills. Annotation of lists is the fundamental building block for this process of skills development ... we develop these lists for other people and for our FUTURE selves.
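To make "annotation of lists" concrete, here is a minimal sketch of what a curated entry might look like in Python; the field names and the example annotation are illustrative, not a fixed schema, and the URL is a placeholder:

```python
from dataclasses import dataclass, field

@dataclass
class Entry:
    """One annotated-list entry; fields are illustrative, not a schema."""
    title: str
    url: str              # link to the paper or repo (placeholder below)
    annotation: str       # the one-paragraph 'why this matters' note
    tags: list = field(default_factory=list)

curated = [
    Entry(
        title="Invertible Dense Networks (i-DenseNets)",
        url="https://example.org/placeholder",  # not a real link
        annotation="Read critically: connects dense architectures to "
                   "invertible flows; claims worth re-checking.",
        tags=["normalizing-flows", "to-reread"],
    ),
]

# Render as the markdown bullets that end up in a repo README.
for e in curated:
    print(f"- [{e.title}]({e.url}) -- {e.annotation} `{', '.join(e.tags)}`")
```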

Human Machine-Augmented Learning Process in List Curation

What we do is intentionally pretty simple ... but it can be a bit tedious and necessarily hands-on and hard, *because we strongly believe in teaching machines to do most of the tedious parts of learning, searching and recommending for us* -- to teach a machine the tedious parts, one first has to do them by hand.

Teaching machines to learn makes us smarter about how we learn. A human who thinks differently, perhaps incoherently, but necessarily intuitively, is needed to refine artificially intelligent systems and to augment the development of better machine learning methods.

Most importantly, it's about the unpredictability of competition coming from others who think differently. It is not exactly necessary to worry about what one is attempting to invent -- it is necessary to have one's head in the game about what others are inventing. Collaborative competition is necessary for the extension of knowledge. Humility matters.

We might stumble across inventions as we constantly adapt ... but the practical objective is better training of our semantic machine-learning models. List curation looks simple, but it is really about taking research out of the lab before it's fully cooked, while the research is still developing.

We are not born smart; we eventually might get smart ... but that's only possible if we GET STARTED.

The deceptively simple start begins with a general idea for a list ... like scribbling a sketch on an erasable board, we will probably start over a few times ... we get started knowing that we will almost certainly throw a few levels of ideas away -- the ideas for lists get better over time, and we will of course refine the scope of the list, but at first it's mostly important to just iterate on iterations ... so the start is to get started with ANY idea.

Getting started means coherently collecting a few hundred papers in order to get a feel for the lay of the land ... reading entries in Wikipedia does not really constitute getting started ... sure, that can be part of what is necessary to actually understand what one is reading, but it is not a substitute for a LOT of reading and iterative refinement of the scope of the collection. Speedreading abstracts and then reading a couple hundred papers might sound rather labor-intensive ... it can be somewhat discouraging at first, because learning jargon, especially in a #DeepTech niche, can be pretty tedious, especially when it comes wrapped in MarketingSpeak or pseudo-science, but such reading is necessarily humbling and frustrating, i.e. if you fully understand everything you're reading, you are probably doing it wrong OR reading well within your comfort level. It is necessary to be humbled and confused at first -- one must read BROADLY and jump ahead to infer more than what one knows.
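A rough sketch of how that first bulk pass might be mechanized, using the public arXiv Atom API (endpoint as documented at arxiv.org/help/api; the query string here is illustrative):

```python
import urllib.parse
import urllib.request
import xml.etree.ElementTree as ET

# Pull a batch of titles + abstracts for speedreading.
query = urllib.parse.urlencode({
    "search_query": "all:atomic layer deposition",  # illustrative query
    "start": 0,
    "max_results": 25,
})
url = f"http://export.arxiv.org/api/query?{query}"

ns = {"atom": "http://www.w3.org/2005/Atom"}
with urllib.request.urlopen(url, timeout=30) as resp:
    feed = ET.parse(resp)

for paper in feed.findall("atom:entry", ns):
    title = paper.findtext("atom:title", "", ns).strip()
    abstract = paper.findtext("atom:summary", "", ns).strip()
    print(title, "\n  ", abstract[:200], "...\n")
```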

As an example, we might start with something like a recent paper on geometric fitting using neural estimation and then explore its connections to a depth of five or six levels ... starting with the next level of connection, reaching into papers such as one on Invertible Dense Networks (i-DenseNets) ... reading critically, with extreme skepticism at first -- after exploring for a while, something like 250 or 500 abstracts and papers should give us a general feel for the lay of the land.
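One way to mechanize that multi-level walk of the citation graph is the Semantic Scholar Graph API; the endpoint, field names and response shape below reflect its public documentation as we understand it, and the seed paper id is left as a placeholder:

```python
import json
import time
import urllib.request
from collections import deque

API = ("https://api.semanticscholar.org/graph/v1/paper/"
       "{pid}/references?fields=title,paperId")

def references(pid):
    """Fetch one paper's reference list (unauthenticated access is
    rate-limited, hence the sleep in the caller)."""
    with urllib.request.urlopen(API.format(pid=pid), timeout=30) as resp:
        payload = json.load(resp)
    return [r["citedPaper"] for r in payload.get("data", [])
            if r.get("citedPaper")]

def explore(seed_pid, max_depth=2, budget=250):
    """Breadth-first walk of the citation graph -- the 'lay of the land'
    pass. Raising max_depth toward 5 or 6 reproduces the deeper dives
    described above; budget caps the total paper count."""
    seen, frontier = {seed_pid}, deque([(seed_pid, 0)])
    while frontier and len(seen) < budget:
        pid, depth = frontier.popleft()
        if depth >= max_depth:
            continue
        for ref in references(pid):
            rid = ref.get("paperId")
            if rid and rid not in seen:
                seen.add(rid)
                print("  " * (depth + 1) + (ref.get("title") or "<untitled>"))
                frontier.append((rid, depth + 1))
        time.sleep(3)  # stay polite to the public endpoint
    return seen

# explore("<seed-paper-id>")  # seed id intentionally left as a placeholder
```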

Popular repositories

  1. nanotoolworks.github.io -- No Job Too Small (SCSS)
  2. .github -- #NoJobTooSmall Contract / ForHire job shop for laboratory tool building / developmental work in the nanoscale realm as well as nanofabrication mfg engineering
  3. clara-holoscan (forked from nvidia-holoscan/.github) -- The AI computing platform for medical devices
  4. warp (forked from NVIDIA/warp) -- A Python framework for high performance GPU simulation and graphics (Python)
