Replies: 6 comments 5 replies
-
Another way of describing this question is: in the current implementation, the mediator is trusted for consistency. What extra verification is necessary to remove this trust assumption?
-
I was just reaching the conclusion that Suggestion A (which is pretty close to what we're already doing) is the best way to proceed.
We have a system that uses a mediator that holds no secret data and that is responsible for taking certain public actions, which all parties can view and verify are being done correctly. The mediator is effectively nothing more than a conduit for sharing public data.
We are considering the option of allowing the mediator to be replaced by peer-to-peer interactions between the guardians. But if we build an elaborate peer-to-peer agreement protocol, then the very next step will be for some election administrator to take the resulting public key and put it into all of the election devices. So there's little point in maintaining the fiction of a completely distributed and decentralized process. As long as some election administrator is going to be responsible for taking the agreed upon joint public key and performing some action, there is the possibility of one or more guardians saying, "Wait, that's not the joint public key we agreed upon."
Given the above, we might as well use a mediator which collects all of the guardian data and publishes it as part of the election record. If something there doesn't match expectations, a guardian can say, "Wait, that's not the information I gave you to publish (or what you published is inconsistent with what you gave me during the key ceremony)." It isn't substantively different - there really aren't any additional assumptions. The mediator isn't trusted to maintain any secret data or even to faithfully publish what it is supposed to publish. The assumption has to be that a guardian has some means of going to the public and asserting, "The mediator published my info incorrectly. I told it to publish X and it instead published Y." At that point, we needn't really figure out who is right, the mediator can just say, "Fine, we'll use Y instead."
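A minimal sketch of the flow described above, with hypothetical Mediator and Guardian classes (this is not the ElectionGuard API): the mediator is a pure conduit, a guardian can audit the published record, and a disputed entry is simply replaced with the guardian's asserted value.

```python
# Hypothetical sketch -- Mediator and Guardian here are stand-ins,
# not the ElectionGuard API.

class Guardian:
    def __init__(self, guardian_id: str, public_key: int):
        self.guardian_id = guardian_id
        self.public_key = public_key

    def audit(self, record: dict) -> bool:
        """True iff the published record shows exactly what this guardian submitted."""
        return record.get(self.guardian_id) == self.public_key


class Mediator:
    """Holds no secrets; merely collects public data and republishes it."""

    def __init__(self):
        self.record = {}

    def collect(self, guardian: Guardian) -> None:
        self.record[guardian.guardian_id] = guardian.public_key

    def correct(self, guardian_id: str, asserted_key: int) -> None:
        # "Fine, we'll use Y instead" -- no adjudication of who was right.
        self.record[guardian_id] = asserted_key


mediator = Mediator()
alice = Guardian("alice", 12345)
mediator.collect(alice)

mediator.record["alice"] = 99999              # a faulty mediator publishes the wrong value
assert not alice.audit(mediator.record)       # Alice detects the discrepancy

mediator.correct("alice", alice.public_key)   # Alice objects publicly; record is corrected
assert alice.audit(mediator.record)
```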
This public vetting needn't be done before the key ceremony concludes. But if the published public key for a guardian is changed, the key ceremony may need to be unwound and repeated with any new data. Ultimately, the functioning of the mediator as a conduit might be entirely replaced by one or more guardians giving their public data to the media. But there doesn't seem to be a substantive distinction between this and a guardian going to the media and asserting, "The published record claims that my public key is X, but it's actually Y."
This may feel like something of a punt, but I don't think that we should spend a lot of time and effort only partially solving a problem when there ultimately needs to be some public adjudication of disagreements outside of ElectionGuard in any case.
Josh
From: Vanessa Teague ***@***.***>
Sent: Wednesday, March 10, 2021 5:07 PM
To: microsoft/electionguard ***@***.***>
Cc: Josh Benaloh ***@***.***>; Mention ***@***.***>
Subject: [microsoft/electionguard] Key-generation verification in the spec (#79)
-
The KeyCeremonyMediator contains a list of the Guardian objects. A Guardian object holds secret keys in "private" fields that start with an underscore. However, I'm guessing that it's possible for an attacker who controls the KeyCeremonyMediator to find out what those values are. Please correct me if I'm wrong. So, as things stand, we have to trust the KeyCeremonyMediator code, and probably any code that runs in the same Python program as the Guardian objects.
An alternative is to keep the Guardian objects in separate processes, perhaps on separate hardware. That's also easier to explain to election officials and the people acting as trustees, if their secrets never leave their possession. With a published remote API of some sort, a trustee could in principle build and run their own stack to participate in the key ceremony. (Maybe not true for decryption, due to efficiency concerns?) Not sure if I'm being more paranoid than needed.
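To answer the "please correct me if I'm wrong" above: yes, this is possible. Python's leading-underscore convention is purely advisory, so any code in the same process can read those fields directly; no reflection machinery is needed, unlike Java. A quick demonstration with a stand-in class (not the real Guardian):

```python
# Stand-in class, not the real Guardian: demonstrates that Python's
# underscore "privacy" does not hide anything from code in the same process.

class Guardian:
    def __init__(self):
        self._auxiliary_private_key = 0xDEADBEEF  # single underscore: an ordinary attribute
        self.__election_secret = 0xCAFEF00D       # double underscore: name-mangled, not hidden

g = Guardian()

# Single-underscore fields are directly readable:
aux = g._auxiliary_private_key            # 0xDEADBEEF

# Double-underscore fields are merely renamed to _ClassName__name:
secret = g._Guardian__election_secret     # 0xCAFEF00D

# And everything is visible in the instance dictionary anyway:
print(sorted(g.__dict__))  # ['_Guardian__election_secret', '_auxiliary_private_key']
```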
-
The mediator should not have any access to private key material in an election. Such access might be useful for testing, but it should not be present in deployed code.
Josh
From: John Caron ***@***.***>
Sent: Thursday, March 11, 2021 9:30 AM
To: microsoft/electionguard ***@***.***>
Cc: Josh Benaloh ***@***.***>; Mention ***@***.***>
Subject: Re: [microsoft/electionguard] Key-generation verification in the spec (#79)
-
The places where I'm seeing that the KeyCeremonyMediator needs to ask the Guardian for something that requires a secret:
1. generate_election_partial_key_backup() needs to pass its polynomial to ElectionPolynomial.compute_polynomial_coordinate()
2. verify_election_partial_key_backup() needs to use its auxiliary private key to decrypt the value.
3. generate_election_partial_key_challenge() needs to pass its polynomial to ElectionPolynomial.compute_polynomial_coordinate()
The places where I'm seeing that the DecryptionMediator needs to ask the Guardian for something that requires a secret:
1. _get_compensated_shares_for_tally() eventually calls Guardian.compensate_decrypt(), which uses its auxiliary private key to decrypt the value.
2. announce() eventually calls Guardian.partially_decrypt(), which uses its election (ElGamal) private key to decrypt the ElGamal.Ciphertext.
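For context on items 1 and 3 in the first list: the "polynomial coordinate" is the guardian's secret polynomial evaluated at another guardian's coordinate, as in Shamir secret sharing, which is why the polynomial (secret material) is needed. An illustrative sketch, with a toy modulus and hypothetical names rather than the ElectionGuard implementation (which works modulo a large prime):

```python
# Illustrative only: toy modulus and hypothetical names, not the
# ElectionGuard implementation.

PRIME_MODULUS = 2**13 - 1  # toy prime; the real protocol uses a large prime

def compute_polynomial_coordinate(coefficients, x):
    """Evaluate a_0 + a_1*x + a_2*x^2 + ... mod PRIME_MODULUS (Horner's rule).

    a_0 is the guardian's secret; the other coefficients are random in practice.
    """
    value = 0
    for a in reversed(coefficients):
        value = (value * x + a) % PRIME_MODULUS
    return value

coefficients = [1234, 166, 94]                           # secret plus two toy "random" coefficients
share = compute_polynomial_coordinate(coefficients, 3)   # backup share for guardian 3
assert share == (1234 + 166 * 3 + 94 * 9) % PRIME_MODULUS
```

Because the coefficients are the secret, this evaluation has to happen on the guardian's side of any mediator boundary.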
If the Guardian is remote, e.g. in a peer-to-peer protocol, then there's no danger that the Mediators can steal its secrets. One could also have remote Guardians with centralized Mediators. As it stands now, we have centralized Mediators with local Guardian objects, which I think are vulnerable. I don't think that's an artifact of prototype vs. production-quality code; I think it's due to their being local objects. In Java, you can get at the values of private fields through reflection. Perhaps someone with Python experience can explain what the situation is for Python.
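(For the Python question: attribute privacy in Python is purely a naming convention, so local objects are indeed readable by anything in the same process.) The remote-Guardian arrangement described above could be framed as a narrow message interface that only non-secret results ever cross. The in-process class below is a hypothetical illustration and provides no real isolation by itself; actual isolation requires the guardian to run in its own process or on its own hardware, with this interface exposed over a remote API:

```python
# Hypothetical sketch of a narrow guardian interface. An in-process class
# provides no real isolation; this only shows the *shape* of a remote API
# in which secrets never cross the boundary. The toy arithmetic stands in
# for real cryptographic operations.

TOY_MODULUS = 2**13 - 1

class RemoteGuardianStub:
    """What a mediator would see: requests in, non-secret results out."""

    def __init__(self):
        # In a real deployment this secret lives in the guardian's own process.
        self.__secret = 424242

    def handle(self, request: dict) -> dict:
        if request["op"] == "public_key":
            # publish something derived from the secret, never the secret itself
            return {"public_key": pow(5, self.__secret, TOY_MODULUS)}
        if request["op"] == "partial_decrypt":
            # return only a decryption *share* for the given ciphertext
            return {"share": (request["ciphertext"] * self.__secret) % TOY_MODULUS}
        return {"error": "unknown operation"}

stub = RemoteGuardianStub()
response = stub.handle({"op": "public_key"})   # contains a public value only
```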
…On Thu, Mar 11, 2021 at 10:23 PM Eleanor McMurtry ***@***.***> wrote:
In a practical deployment, the mediator shouldn't need a list of guardians. Unsure if this is possible with the current ElectionGuard API, as the code I've written (see reply to Josh's comment earlier) does not use the mediator class, but does it peer-to-peer.
-
@benaloh and all, I've sketched a start of a slightly more detailed description of "we might as well use a mediator which collects all of the guardian data and publishes it as part of the election record. If something there doesn't match expectations, a guardian can say, 'Wait, that's not the information I gave you to publish (or what you published is inconsistent with what you gave me during the key ceremony).'" It's at https://github.com/microsoft/electionguard/compare/keyGenVerificationIssue80 - I added it at the bottom of the existing key-generation description. (I also added some round numbers for reference, but didn't change the content of the existing key gen.) One thing that struck me is that it's probably worth checking at the initial stage (i.e. at key establishment, before voting starts) that the published secret shares
-
I'd like to start a discussion about tightening up the process of verifying that key generation has been conducted properly. At the moment the spec assumes broadcast, but this may not be feasible for all practical scenarios. Furthermore, we need to be a bit clearer about what happens when things seem to have gone wrong. For example, there's a crucial step in which the guardians publish commitments $K_{ij}$, and at the moment this is implemented by point-to-point communication - we need more detail about how to detect and respond to guardians who send inconsistent commitments.
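A minimal sketch of the detection problem this raises, with hypothetical structures rather than anything from the spec: if the commitments travel point-to-point, a cheating guardian can send different values to different peers, and comparing every receiver's view exposes the equivocation.

```python
# Hypothetical structures, not the spec: received[receiver][sender] is the
# commitment that `receiver` got from `sender` over a private channel.

def find_equivocators(received: dict) -> set:
    """Return the senders whose commitment differs between receivers."""
    senders = {s for view in received.values() for s in view}
    bad = set()
    for sender in senders:
        values = {view[sender] for view in received.values() if sender in view}
        if len(values) > 1:
            bad.add(sender)
    return bad

views = {
    "G1": {"G2": 0xAAA, "G3": 0xBBB},
    "G2": {"G1": 0x111, "G3": 0xBBB},
    "G3": {"G1": 0x111, "G2": 0xCCC},  # G2 sent G3 a different commitment
}
assert find_equivocators(views) == {"G2"}
```

Of course, this comparison only works if the receivers' reports themselves land somewhere everyone can see consistently, which is exactly the broadcast assumption at issue.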
I think we all agree that we do not need (or want) a heavyweight consensus protocol - it should suffice to detect a malfunction and correct it manually. I think (though I am happy to be corrected) that this corresponds to what the MPC literature calls the covert adversary model.
I think we possibly want a key-gen-verification spec that describes exactly how the guardians verify that the process has been completed correctly, and what to do if they detect a problem.
Question: Is this something that the guardians verify among themselves, and then sign, or something that third parties verify on the election transcript, or both?
So the question is, under what assumptions, and given what clarifications to the verification spec, can we be confident that a passed verification implies a properly-generated key?
Here's my best-effort sketch:
Suggestion A
Suppose we can assume a reliable broadcast at the end, i.e. a final transcript that comes with some evidence that everyone is seeing the same one. (Exactly how this is achieved is a separate question.)
Then we would run the key generation protocol exactly as it is now specified, and each guardian would verify the ZKPs as now specified. Then, in the final round, each guardian would also check that the transcript matched their view, and in particular that the values broadcast on it matched the ones they had received through private channels - if it did, they would endorse the resulting key in some way; if it didn't they would object.
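The final-round check could look something like this sketch (hypothetical names, not the spec): given a broadcast transcript, each guardian verifies that the values on it match the ones received through private channels, then endorses or objects.

```python
# Hypothetical names, not the spec: my_view maps labels (e.g. which
# guardian's commitment) to the values this guardian received privately.

def review_transcript(my_view: dict, transcript: dict):
    """Return ('endorse', []) if the transcript matches this guardian's view,
    otherwise ('object', mismatched_labels)."""
    mismatches = [k for k, v in my_view.items() if transcript.get(k) != v]
    return ("endorse", []) if not mismatches else ("object", mismatches)

my_view = {"K_21": 0xAB, "K_31": 0xCD}

assert review_transcript(my_view, {"K_21": 0xAB, "K_31": 0xCD}) == ("endorse", [])
assert review_transcript(my_view, {"K_21": 0xAB, "K_31": 0xEE}) == ("object", ["K_31"])
```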
Question 1: does it suffice to check only the final key? Ans: probably not, but not certain.
Question 2: does it suffice to check the commitments K_{ij}? Ans: probably yes, but not certain.
Suggestion B
An alternative proposal, which both @benaloh and @eleanor-em have suggested, is to get each guardian to share their view of others' keys, something like this:
Before communication, guardians generate their primary public keys, associated polynomials, and auxiliary public keys.
Round 1
Guardians broadcast/share their public keys and key-share commitments.
Round 2
Each guardian uses the others' auxiliary keys to share private key shares, along with its own view of the others' keys.
Round 3
Each guardian “broadcasts” confirmations or objections.
Round 4 (If objections are lodged)
Each challenged guardian broadcasts the key shares it sent to its challengers.
I think this could also work, and could be good for a scenario in which we don't assume a final broadcast step.
Both ideas need more careful consideration, and an understanding of how their assumptions can be met and what fraction of malicious participants they defend against.
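For concreteness, the rounds of Suggestion B can be sketched as plain message-passing over toy data (opaque strings instead of keys and commitments, and a None share standing in for one that fails verification; all names here are hypothetical). The point is only the shape of the protocol, including the optional Round 4 reveal:

```python
# Toy sketch of the four rounds -- hypothetical names and data, not the spec.

def run_key_ceremony(guardians, shares_sent):
    # Round 1: everyone broadcasts/shares public keys and key-share
    # commitments (modeled here as opaque strings).
    broadcast = {g: f"pubkey-of-{g}" for g in guardians}

    # Round 2: private key shares travel pairwise via the auxiliary keys,
    # along with each guardian's view of the others' keys (elided here).

    # Round 3: confirmations or objections. A guardian objects if the share
    # it received doesn't verify (modeled as a None share).
    objections = [
        (receiver, sender)
        for (sender, receiver), share in shares_sent.items()
        if share is None
    ]
    if not objections:
        return "key confirmed", []

    # Round 4: each challenged guardian publicly reveals the shares it sent
    # to its challengers, so everyone can adjudicate.
    reveals = [(sender, receiver) for receiver, sender in objections]
    return "challenged", reveals

guardians = ["G1", "G2", "G3"]
ok_shares = {(s, r): 1 for s in guardians for r in guardians if s != r}
assert run_key_ceremony(guardians, ok_shares)[0] == "key confirmed"

bad_shares = dict(ok_shares)
bad_shares[("G2", "G3")] = None          # G2 sends G3 a bad share
assert run_key_ceremony(guardians, bad_shares) == ("challenged", [("G2", "G3")])
```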