
How do I make the bloody CI work here?! #667

Open
SpenceKonde opened this issue Feb 11, 2022 · 13 comments
Labels
Blocked Development work cannot proceed for reasons described in issue comments. Critical Serious issue requiring priority fix SUPERCRITICAL This issue is an existential threat to the future of the core. Rewards are available for solution
Milestone

Comments

@SpenceKonde
Owner

Syntax error on line 87? What the hell do you mean syntax error on line 87? What's wrong with it? How do I fix it?

I have no idea how to do this and am just cribbing from what @per1234 did for megaTinyCore

Does anyone know how to do github actions and can help me out with https://github.com/SpenceKonde/ATTinyCore/blob/v2.0.0-dev/.github/workflows/compile-examples.yml

I just have no idea where to even begin since I have no idea what I'm doing here.

Anyway, marking this as critical because it blocks 2.0.0 release and indeed I'm at the point in development where this is blocking essentially all further work on 2.0.0.

@SpenceKonde SpenceKonde added Blocked Development work cannot proceed for reasons described in issue comments. Critical Serious issue requiring priority fix labels Feb 11, 2022
@apws

apws commented Feb 11, 2022

hi, not sure, but I tried to paste the yaml into this validator and it reported the following:
https://jsonformatter.org/yaml-validator

Error : bad indentation of a sequence entry at line 88, column 12:
               device: attinyx41,chip=841
               ^
Line : undefined  undefined
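For reference, "bad indentation of a sequence entry" means a dash in a YAML list that doesn't line up with its siblings. A made-up fragment (not the actual workflow) showing the broken and fixed forms:

```yaml
# Broken: the second entry's dash is indented deeper than the first,
# which yields "bad indentation of a sequence entry":
matrix:
  include:
    - device: attinyx41,chip=841
       - device: attinyx61,chip=861

# Fixed: every dash in the sequence starts in the same column:
matrix:
  include:
    - device: attinyx41,chip=841
    - device: attinyx61,chip=861
```

A stray tab in the leading whitespace produces the same error, which is why converting tabs to spaces matters here.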

@apws

apws commented Feb 11, 2022

this looks valid - I don't know your opinions about tabs; I tried to convert them all to spaces using vscode (command .. convert ...) but no success. At least editing in the validator helped somewhat.
compile-examples.yml.txt

@SpenceKonde
Owner Author

Ugh, well I think it's closer now, but I wasn't able to make any real progress, and I don't understand the errors from the metaCI workflows or how to fix them.

@SpenceKonde
Owner Author

There are huge numbers of errors being reported, even for workflows that run without issue.

@SpenceKonde
Owner Author

SpenceKonde commented Mar 2, 2022

Well, I have no idea how to fix this as it stands

I'm increasingly doubting whether this action is appropriate for what I am using it for. It appears to be inefficient (it looks like it sets up and tears down the whole bloody environment way too often), and it has to be somewhat misused to do what I need, since it's not designed to test a bunch of tools submenu options, which is what I need. Covering all options on all processors with a single specially crafted sketch would do more to test the code than testing 50 examples with a few dozen combinations of chip and option. I might be better off with a github action that launched one runner per board and did the same prep that that action does, but after that it would diverge.

Alongside the boards.txt, I could generate a series of .py files which the test could load. They would just contain a dictionary variable: the keys would be the names of menus and each value would be a list of options - or possibly another dictionary, so I could add constraints to each option so it would be skipped if inappropriate. Maybe the list of sketches could even go there.

I could also probably, in that case, rig up some way to get the size reports out. I see this report-size-deltas action firing, but as far as I can tell there's no way to view the reports, and I can't find any indication that they're ever made available except in pull requests - but 95+% of the time I want to know for commits. A python script, on the other hand, could connect to a remote server and upload the test results. I'd have a folder on a dedicated server containing the reports as csv; each file would be like boardname_commit.csv, distilled down to pass/fail/skip for every example sketch. Then an hourly cronjob could combine the size reports for the different boards of a single commit into one file. And I could also upload a boardname_warn.txt and boardname_error.txt in the event that either of those were found.

In fact, I could test many things that I can't even in theory test now, the main one that comes to mind being that a failure to compile is always considered a failed test. That's appropriate for examples - one shouldn't be shipping examples that don't work - but for testing that the core is working correctly, there is a ton of code that, if it compiles successfully, indicates a defect.
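A generated per-board test description could look something like this (all names here are hypothetical - this is a sketch of the idea, not anything that exists in the repo):

```python
# Sketch of a generated menu-description file. The real generator would emit
# this alongside boards.txt; menu names, options, and board names are invented.
MENUS = {
    "clock": {
        "options": ["8internal", "16internal", "20external"],
        # Constraint: skip an option when the board can't support it.
        "skip_if": lambda board, opt: opt == "20external" and board in ("attiny828",),
    },
    "bod": {
        "options": ["disabled", "1v8", "2v7"],
        "skip_if": lambda board, opt: False,
    },
}

def combinations_for(board):
    """Yield every (menu, option) pair that is valid for this board."""
    for menu, spec in MENUS.items():
        for opt in spec["options"]:
            if not spec["skip_if"](board, opt):
                yield menu, opt

print(sum(1 for _ in combinations_for("attiny828")))
```

The test runner would load this, build one FQBN per combination, and record pass/fail/skip per sketch into the csv described above.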

By which I mean:
If I explicitly specify that I want to use TCB1, while compiling for the 1604 which doesn't have that timer and the compilation doesn't fail..... that's wrong (what is it using for timing? Whatever the answer is, it's not correct!). If I try to digitalWrite() to a constant "pin" that is neither a pin, nor (for compatibility with some libraries) NOT_A_PIN, and it compiles, that's a bug - any behavior that could be generated cannot possibly be what the user asked for, because they asked for the impossible. Likewise if I if use a volatile variable for the pin number passed to digitalWriteFast, that shouldn't compile either. To the greatest extent possible, my cores are all designed to give compile errors if we can know at compile time that the user is asking us to do something that cannot be done. IMO, that is the only correct behavior if we know that the line will never be valid. Any other behavior is incorrect - The official cores seem to have the view, in my opinion an incredibly misguided one, that almost any old piece of bad code should compile, and then misbehave at runtime leaving the user to wonder where they went wrong - we have the tools to put up a neon sign pointing at exactly where the bug lies and what it is, and I try to maximize my use of them.

If I knew python and yaml equally well, the new github action route would be more work - though I think the payoff would still be substantial and likely worth it to go the python script route. But I don't know them equally well: python is a language I know far better (even though, as I said, I don't know it all that well!), which is far easier to learn more of, and which is far more versatile and useful. I have half a dozen other projects I want to do in python, so I have an interest in having as much logic there, and as little in the yaml, as possible.

Other reasons to get better at python:
Imagine a little SFPC, or maybe even a pi, with a giant USB hub plugged into it (or maybe not even so giant - I've got a design for an FT4232 quad UART board just about done which, unlike the CJMCU one from aliexpress, has a sane pin mapping), a serial adapter plugged into every port, and boards dangling from it. Each board would have one port (optiboot boards) or two (others - serialupdi uploads for modern AVRs, Arduino as ISP for everything else) dedicated to it. A cronjob could kick off every night and run tests using ACTUAL HARDWARE!

And I want an avrdude-free stk500 uploader for sure (I hate being shackled to that monstrosity), and a program to fix up assembly listings: marking the destinations of jumps, calls and branches; removing comments that give an offset relative to a symbol in the data space (ex, pointing out that a jump to something located at 0x550 on a DA-series is jumping to EEPROM_SIZE+0x38, which is utterly useless and just craps up the listing, since the jump is about the code space and that isn't even an address); and converting vectors back into their names - all of that can be done with regexes. It could also count up the number of times each instruction is used in order to plot a histogram (I am extremely curious about what patterns would show up in this: which instructions are common everywhere, which are common in some but not all sketches, which might as well not exist, and so on).
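The listing-fixup part really is pure regex work. A tiny sketch of the idea (the listing fragment, addresses, and symbols are invented; a real tool would stream avr-objdump output):

```python
import re
from collections import Counter

# Toy fragment of an avr-objdump-style listing, invented for illustration.
listing = (
    " 550:\t0c 94 1c 02\tjmp\t0x438\t; 0x438 <EEPROM_SIZE+0x38>\n"
    " 554:\tff cf      \trjmp\t.-2\t; 0x554\n"
    " 556:\t80 e1      \tldi\tr24, 0x10\n"
)

# 1) Strip the misleading data-space symbol comments like <EEPROM_SIZE+0x38>
#    that objdump attaches to code-space targets.
cleaned = re.sub(r"\s*<[A-Za-z_]\w*\+0x[0-9a-fA-F]+>", "", listing)

# 2) Count how often each mnemonic appears, for the usage histogram.
#    Mnemonics sit between tabs, after the raw opcode bytes.
histogram = Counter(re.findall(r"\t([a-z]+)\t", cleaned))
print(histogram)
```

Marking jump/branch destinations would be one more pass: parse each relative target, then annotate the line at that address.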

And, of course, something that would take input from my planned board testers: if the connections pass, upload the internal oscillator calibration sketch (if it's a modern tiny, or a classic one once I write that), log its output (which on modern AVRs includes the entire content of the SIGROW, plus the REVID, and the measured number of CPU clock cycles per millisecond at each system clock speed), and then erase that and upload blink (the userrow containing the cal bytes would stay intact).

The long-term goal of the data is to see how much data, combined with observed patterns, is needed to identify the curve of frequency vs. calibration register on a modern AVR; the line of best fit that excel generates is a near perfect fit. If we could predict the speed from, say, JUST the two cal values for the two frequencies, that would let us tune to different speeds without a tuning sketch, which would be a damned spiffy trick for sure - you could take a virgin tinyAVR 0/1/2-series, select 24 MHz internal, and not worry about having to tune it. I can already tune pretty damned well without an external timebase, but the sketch isn't something you'd want to run every time you started the core; though a self-tune sketch that would then tell the bootloader to delete it once it had tuned the chip is totally possible.
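The two-cal-value prediction is just linear interpolation in the calibration register. A sketch with invented cal values (real ones would be read from the SIGROW, and the linearity is an assumption the collected data would have to confirm):

```python
# Hypothetical factory calibration points, invented for illustration:
# the cal register values that produce 16 MHz and 20 MHz.
cal_16, cal_20 = 0x48, 0x52
f_16, f_20 = 16.0, 20.0  # MHz

def cal_for(target_mhz):
    """Predict the cal value for an arbitrary target speed, assuming
    frequency is (locally) linear in the calibration register."""
    slope = (f_20 - f_16) / (cal_20 - cal_16)  # MHz per cal step
    return round(cal_16 + (target_mhz - f_16) / slope)

print(cal_for(24.0))
```

If this holds up against measured data, selecting "24 MHz internal" would reduce to computing one number from two factory bytes.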

The fact that there are all these things that I could do with python, and for which yaml isn't relevant, is a strong argument for the python route, since it's the one that involves learning a more useful skill.

@SpenceKonde
Owner Author

I'm going to bump this one. 2.0.0 will never be released unless this issue is solved. It is way outside my wheelhouse, and I have a to-do list that has those around me begging me to relax.

@SpenceKonde SpenceKonde added the SUPERCRITICAL This issue is an existential threat to the future of the core. Rewards are available for solution label Aug 15, 2022
@SpenceKonde SpenceKonde added this to the 2.0.0 release milestone Aug 15, 2022
@jvasileff

There are several problems with compile-examples.yml, some of which need to be solved to find the next problem, and so on.

  • The available-flash-*-plus-true-sketch-paths include non-existent sketches, causing quick failure
  • matrix.available-flash-kB isn't usually available in the actions/checkout@v2 action. The matrix setup definitely needs work
  • The fqbn doesn't seem to work
  • Even when the above don't get in the way, there are errors compiling the scripts

On the fqbn issue, what is the correct format? Based on what megaTinyCore does, I would expect ATTinyCore:avr:attinyx7,chip=167 to be valid, but I instead get:

[168](https://github.com/jvasileff/ATTinyCore/actions/runs/3083130505/jobs/4983682959#step:3:173)
  Error during build: Error resolving FQBN: board ATTinyCore:avr:attinyx7,chip=167 not found

This error does not occur when trying the fqbn ATTinyCore:avr:attinyx7.

@per1234
Contributor

per1234 commented Sep 19, 2022

On the fqbn issue, what is the correct format?

The format of the FQBN is like this:

<vendor ID>:<architecture>:<board ID>[:<menu ID>=<option ID>[,<menu ID>=<option ID>]...]

I would expect ATTinyCore:avr:attinyx7,chip=167 to be valid

No. The separator between <board ID> and <menu ID> must be a colon (:), not a comma (,).
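Putting that together with the board and option discussed above, the working invocation looks something like this (the sketch path is invented; the comma only appears between a second menu option and the first):

```shell
# Colon between board ID and the first menu option;
# commas separate any additional menu options after that.
arduino-cli compile --fqbn "ATTinyCore:avr:attinyx7:chip=167" path/to/Sketch
```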

@jvasileff

Ah, thanks @per1234! With that I should be able to mostly fix the script.

@jvasileff

jvasileff commented Sep 20, 2022

I made some changes to fix the matrix & FQBNs. For now, to speed up testing, I also disabled everything except the attiny85 & 1634.

You can see the changes here: REMOVED

And the workflow run here: REMOVED

Currently, all compilations fail with the error below (but at least compilation is attempted!)

2022-09-20T16:04:52.7449029Z ##[group]Compiling sketch: avr/libraries/EEPROM/examples/eeprom_crc
2022-09-20T16:04:52.7449896Z In file included from /home/runner/work/ATTinyCore/ATTinyCore/avr/libraries/EEPROM/examples/eeprom_crc/eeprom_crc.ino:10:0:
2022-09-20T16:04:52.7450774Z /home/runner/.arduino15/packages/ATTinyCore/hardware/avr/1.5.2/cores/tiny/Arduino.h:230:10: fatal error: pins_arduino.h: No such file or directory
2022-09-20T16:04:52.7451297Z  #include "pins_arduino.h"
2022-09-20T16:04:52.7451713Z           ^~~~~~~~~~~~~~~~
2022-09-20T16:04:52.7452441Z compilation terminated.

I haven't looked into this error at all - I'm hoping @SpenceKonde or anyone else can weigh in, as perhaps this is something easy to fix with better knowledge of the core.

EDIT 9/27: I didn't realize this was already being worked on (#717). I commented there rather than pursuing my patch.

@SpenceKonde
Owner Author

Okay things are looking better here.

But we get scads of failures due to trying to use a crystal on parts that don't have that option because they don't support a crystal. I don't understand the yaml format well enough to figure out how to add something that would skip the external-crystal test on some parts - I mean, I can add a parameter to the matrix entries, but I don't understand how to use it to control whether we do an external crystal run. Not all parts support an external crystal (the 48, 88, 828, and 43u all do not).
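One way to express this in the matrix (a sketch only - the entry names and step here are hypothetical, not the actual workflow) is to give each entry a flag and gate the crystal step on it:

```yaml
jobs:
  compile:
    strategy:
      matrix:
        include:
          - device: attinyx41,chip=841
            has-crystal: true
          - device: attinyx8,chip=88
            has-crystal: false   # 48/88/828/43u have no crystal option
    steps:
      - name: Compile with external crystal
        if: matrix.has-crystal == true
        run: echo "compile with the crystal FQBN here"
```

Anything set in `include` becomes a property of that matrix entry, so `matrix.has-crystal` is available in step-level `if:` expressions.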

@SpenceKonde
Copy link
Owner Author

Okay, I may have figured it out - let's see how this runs.
Parts without a crystal shouldn't get the crystal test run on them, and I added extclk, pll, and a "specialclock" option to specify some unique clock. The crystal clock is specified too, so later I can go through and distribute various speeds that are valid options amongst the parts, to catch cases where certain speeds conditionally compile code that is not valid.

I tried to make sure it's catching all the ugly weird ones:

  • 16 MHz internal on the 841/441 (they will usually hit 16 if OSCCAL is set to the right value, typically in the 240-255 range).
  • 16 MHz and 16.5 MHz for PLL parts except for the tiny26 (it only gets the PLL at 16 tested)
  • 8 MHz crystal on micronucleus t167 where it's prescaled because the prescaling is configured at runtime meaning a different code path is followed.
  • 8 MHz from PLL on micronucleus t85/861 - by resetting tuning and then prescaling it, we are able to offer users 1, 2, 4, 8, and 16 MHz pll speeds on micronucleus even when the bootloader is leaving it set for the tuned 16.5 speed (a different codepath from above, because here we're also checking the sigrow to get the original OSCCAL, setting that, then prescaling).
  • 8 MHz ext clock prescaled from 16 MHz for the MHET tiny88.
