How do I make the bloody CI work here?! #667
hi, not sure, but I tried to paste the yaml into this validator, and it reported this:
|
this looks valid - I don't know about your opinions on tabs; I tried to convert them all to spaces using VS Code (command ... convert ...) but with no success. At least editing in the validator helped somehow |
Looks like you might need some CI for your CI @SpenceKonde 😉 |
Ugh, well I think it's closer now, but I wasn't able to make any real progress, and I don't understand the errors from the metaCI workflows or how to fix them. |
There are huge numbers of errors being reported, even for workflows that run without issue. |
Well, I have no idea how to fix this as it stands. I'm increasingly doubting whether this action is appropriate for what I am using it for. It appears to be inefficient (it looks like it sets up and tears down the whole bloody environment way too often), and it has to be kind of misused to do what I need, since it's not designed to test a bunch of tools submenu options, which is what I need (covering all options on all processors with a single specially crafted sketch would do more to test the core than testing 50 examples with a few dozen combinations of chip and option). I might be better off with a GitHub action that launched one runner per board and did the same prep that that action does, but diverged after that: alongside the boards.txt, I could generate a series of .py files which the test could load. They would just contain a dictionary variable; the keys would be the names of menus and the values would be lists of options - or possibly another dictionary, so I could add constraints to each option so it would be skipped if inappropriate. Maybe the list of sketches could even go there too. In that case I could also probably rig up some way to get the size reports out - I see this report-size-deltas action firing, but as far as I can tell there's no way to view the reports, and I can't find any indication that they're ever made available except in pull requests, whereas 95+% of the time I want to know for commits. A Python script, on the other hand, could connect to a remote server and upload the test results. I'd have a folder on a dedicated server containing the reports as CSV; each file would be named like boardname_commit.csv, distilled down to pass/fail/skip for every example sketch. Then an hourly cronjob could combine the size reports for the different boards of a single commit into one file. And I could also upload a boardname_warn.txt and boardname_error.txt in the event that either of those were found.
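As a rough illustration of the generated-dictionary idea described above: a per-board .py file could carry the menu/option structure, with constraint tags used to skip inappropriate combinations. Every name here (the menus, options, and tag scheme) is a made-up sketch, not ATTinyCore's actual menu layout.

```python
import itertools

# Hypothetical contents of a generated per-board options file. Each key is a
# tools submenu; each value is a list of options, or a dict mapping an option
# to a constraint tag so the runner can skip it when inappropriate.
OPTIONS = {
    "clock": ["8internal", "1internal", "16external"],
    "bod": {"disable": None, "1v8": None, "4v3": "requires_5v"},
}

# The sketch list could live in the same generated file.
SKETCHES = ["Blink", "AnalogRead"]

def expand(options, skip_tags=()):
    """Yield every menu/option combination, honoring constraint tags."""
    menus = []
    for menu, opts in options.items():
        if isinstance(opts, dict):
            opts = [o for o, tag in opts.items() if tag not in skip_tags]
        menus.append([(menu, o) for o in opts])
    for combo in itertools.product(*menus):
        yield dict(combo)

# 3 clock options x 2 surviving bod options after skipping "requires_5v":
combos = list(expand(OPTIONS, skip_tags=("requires_5v",)))
```

Each yielded dict could then be formatted into one compile invocation, and the pass/fail/skip result written to the per-board CSV mentioned above.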
In fact, I could test many things that I can't even in theory test now; the main one that comes to mind being that a failure to compile is always considered a failed test. That's appropriate for examples - one shouldn't be shipping examples that don't work - but for testing that the core is working correctly, there is a ton of code that, if it compiles successfully, indicates a defect. By which I mean: if I knew Python and YAML equally well, the new GitHub action route would be more work - though I think the payoff would still be substantial and likely worth doing the Python script route anyway. But I don't know them equally well. Python is a language I know far better (even though, as I said, I don't know it all that well!), is far easier to learn more of, and is far more versatile and useful. I have half a dozen other projects I want to do in Python, so I have an interest in having as much of the logic there, and as little in the YAML, as possible.
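The "code that should fail to compile" idea above could be handled by a small wrapper that inverts the usual pass/fail convention. This is a hedged sketch only: the real command would be an arduino-cli compile invocation, but here plain POSIX commands stand in for the compiler so the inversion itself can be demonstrated.

```python
import subprocess

def must_not_compile(cmd):
    """Run a compile command; the test PASSES only if the compile FAILS.

    `cmd` would be something like an arduino-cli compile invocation in the
    real runner; any command works for demonstrating the logic.
    """
    result = subprocess.run(cmd, capture_output=True)
    return result.returncode != 0

# Stand-in commands: `false` always exits nonzero, `true` always exits zero.
ok = must_not_compile(["false"])   # "compiler" failed, so the test passes
bad = must_not_compile(["true"])   # "compiler" succeeded, so the test fails
```

A real runner would also want to distinguish "failed for the expected reason" by grepping stderr for the expected diagnostic, so an unrelated breakage doesn't masquerade as a pass.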
If we could predict the speed from, say, JUST the two cal values for the two frequencies, that would let us tune to different speeds without a tuning sketch, which would be a damned spiffy trick for sure - you could just take a virgin tinyAVR 0/1/2-series, and say you wanted it to run at 24 MHz, you could select 24 MHz internal and not worry about having to tune it. I can already tune pretty damned well without an external timebase, but the sketch isn't somethingou'd want to run every time yo started the core; though a self-tune sketch that would then tell the bootloader to delete it once it had tuned the chiop is totally possible on ), and then erase that and upload blink (the userrow containing the calbytes would stay intact) The fact that there are all these things that I could do with python, and for which yaml isn't relevant for is a strong argument for the pythony route, since it's the one that involves learning a more useful skill. |
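The "line of best fit" step above is just an ordinary least-squares linear fit of measured frequency against calibration register value. A minimal pure-Python sketch, using entirely made-up calibration points (real data would come from the tuning sketch's logged output):

```python
def linear_fit(xs, ys):
    """Ordinary least-squares fit y = m*x + b (no numpy needed)."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    den = sum((x - mean_x) ** 2 for x in xs)
    m = num / den
    return m, mean_y - m * mean_x

# Synthetic, illustrative data only: (cal register value, measured MHz).
points = [(96, 16.0), (112, 18.0), (128, 20.0), (144, 22.0), (160, 24.0)]
m, b = linear_fit([p[0] for p in points], [p[1] for p in points])

# Invert the fit to predict the cal value needed for a 24 MHz target:
target_cal = round((24.0 - b) / m)
```

If the near-perfect linearity holds, two known (cal, frequency) pairs fix m and b, which is exactly what would make "tune from just the two factory cal values" possible.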
I'm going to bump this one. 2.0.0 will never be released unless this issue is solved. It is way outside my wheelhouse, and I have a to-do list that has those around me begging me to relax. |
There are several problems with
On the
This error does not occur when trying the fqbn |
The format of the FQBN is like this:
No. The separator between |
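For reference, the general FQBN shape documented by arduino-cli is as follows; the ATTinyCore board and option names in the example line are assumptions for illustration, not taken from this thread:

```
# FQBN format per the arduino-cli documentation:
#   VENDOR:ARCHITECTURE:BOARD_ID[:MENU_ID=OPTION_ID[,MENU2_ID=OPTION_ID]...]
# Colons separate the vendor/architecture/board fields; commas (not colons)
# separate the menu options appended after the board ID.
arduino-cli compile --fqbn ATTinyCore:avr:attinyx5:chip=85,clock=8internal Blink
```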
Ah, thanks @per1234! With that I should be able to mostly fix the script. |
I made some changes to fix the matrix & FQBNs. For now, to speed up testing, I also disabled everything except the attiny85 & 1634.
EDIT 9/27: I didn't realize this was already being worked on (#717). I commented there rather than pursuing my patch. |
Okay, things are looking better here. But we get scads of failures from trying to use an external crystal on parts that don't have that option, because they don't support a crystal. I don't understand the YAML format well enough to figure out how to skip the external-crystal test on those parts - I mean, I can add a parameter to the matrix entries, but I don't understand how to use that parameter to control whether we do an external-crystal run. Not all parts support an external crystal (the 48, 88, 828, and 43u all do not). |
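One common pattern for this in GitHub Actions is exactly the "parameter in the matrix entries" idea: a per-entry flag that later steps gate on with an `if:` condition. This is a hedged sketch, not ATTinyCore's actual workflow - the board names and step contents are placeholders:

```yaml
jobs:
  compile:
    strategy:
      matrix:
        include:
          - board: attiny85
            has_crystal: true
          - board: attiny828    # one of the parts with no crystal support
            has_crystal: false
    runs-on: ubuntu-latest
    steps:
      - name: Compile with external crystal
        if: matrix.has_crystal == true
        run: echo "external-crystal compile for ${{ matrix.board }} goes here"
```

The alternative is a matrix `exclude:` list that removes the unsupported board/clock combinations before any jobs are created, which avoids spinning up runners that would only skip.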
Okay, I may have figured it out - let's see how this runs. I tried to make sure it's catching all the ugly weird ones:
|
Syntax error on line 87? What the hell do you mean syntax error on line 87? What's wrong with it? How do I fix it?
I have no idea how to do this and am just cribbing from what @per1234 did for megaTinyCore
Does anyone know how to do github actions and can help me out with https://github.com/SpenceKonde/ATTinyCore/blob/v2.0.0-dev/.github/workflows/compile-examples.yml
I just have no idea where to even begin since I have no idea what I'm doing here.
Anyway, marking this as critical because it blocks 2.0.0 release and indeed I'm at the point in development where this is blocking essentially all further work on 2.0.0.