This repository has been archived by the owner on Nov 2, 2018. It is now read-only.

Use BlockHeight to calculate uptime/downtime instead of timestamps. #2908

Open: wants to merge 2 commits into master

Conversation

@mikkeljuhl (Contributor) commented Mar 28, 2018

Use BlockHeight to calculate uptime/downtime instead of timestamps. timePassed is calculated as the number of blocks that have passed multiplied by 10 minutes, rather than from the timestamps.
This is more reliable in case users change their system time or anything to that effect.

This is a move towards making a GlobalAverageUptime.

@ChrisSchinnerl @DavidVorick

@ChrisSchinnerl (Contributor) left a comment

This looks pretty good to me overall. There are only a few things that could be refactored. We should wait for @DavidVorick to see if he also approves of those changes.

@@ -119,8 +119,9 @@ type HostDBEntry struct {

// HostDBScan represents a single scan event.
type HostDBScan struct {
	Timestamp time.Time `json:"timestamp"`
	Success   bool      `json:"success"`
	Timestamp time.Time `json:"timestamp"`
Contributor:

@DavidVorick is there still a need to keep the Timestamp?

Member:

I'm going to go ahead and say yes, we should keep the timestamp in the scan, just in case we find some use for it later. We may (or may not) phase it out eventually, but I can think of a few possible reasons we may want it at some point.

@@ -311,6 +312,7 @@ func (hdb *HostDB) uptimeAdjustments(entry modules.HostDBEntry) float64 {
	downtime := entry.HistoricDowntime
	uptime := entry.HistoricUptime
	recentTime := entry.ScanHistory[0].Timestamp
Contributor:

If we decide to get rid of the Timestamp field we can probably also get rid of that line and check the order later using the blockheight instead. We should wait for @DavidVorick to give some input on this though.

	if recentSuccess {
		uptime += scan.Timestamp.Sub(recentTime)
		uptime += time.Duration(timePassed * decay)
Contributor:

This code here should do pretty much the same as the one in updateEntry. At a first glance it seems like uptime and downtime themselves are not decayed.

Contributor:

Now that I think about it, we can probably also change uptime, downtime, historicUptime, and historicDowntime to be of type BlockHeight instead of Duration if we decide to do everything using blocks.

When we calculate the ratio later we would just cast them to float64.

Contributor Author:

We shouldn't turn historicUptime and historicDowntime into type BlockHeight.
According to David, time (and not block height) should be used for things (such as hosts) that live outside the protocol.


	blockDiff := scan.BlockHeight - recentBlockHeight
	timePassed := float64(time.Duration(blockDiff) * 10 * time.Minute)
	halftime := float64(1 * time.Hour * 24 * 30)
Contributor:

halftime should be in modules/renter/hostdb/consts.go instead of being declared inline in 2 files.

@mikkeljuhl mikkeljuhl force-pushed the feature/uptime branch 2 times, most recently from 36759d6 to d21257b Compare April 3, 2018 18:25
@mikkeljuhl (Contributor Author) commented Apr 3, 2018

@ChrisSchinnerl I've rebased and addressed your comments, please review 👍
@DavidVorick

@@ -34,6 +34,9 @@ const (
	// scans start getting compressed.
	minScans = 12

	// halftimeUpDownTime is the halftime used to decay the host up and downtime
	halftimeUpDownTime = 30 * 24 * time.Hour
Member:

I think the word that you want here is halflife. I'd probably call this uptimeHalflife

@@ -366,6 +362,22 @@ func (hdb *HostDB) uptimeAdjustments(entry modules.HostDBEntry) float64 {
	return math.Pow(uptimeRatio, exp)
}

// decayHostUpOrDowntime decays a host's historic uptime or historic downtime.
// It also adds the new block height to the historic uptime or historic downtime.
func decayUptimeOrDowntime(entry *modules.HostDBEntry, scan modules.HostDBScan, recentScan modules.HostDBScan) {
Member:

Functions that don't have any receiver should be moved to the top of the file and sorted alphabetically among each other.

Member:

also, I would just name this function decayUptime, since we consider the cluster of uptime fields collectively to be the uptime fields.

Member:

as pointed out above, this function needs to be moved to scan.go

@@ -2,6 +2,9 @@ package main

import (
"fmt"
"github.com/NebulousLabs/Sia/modules"
"github.com/NebulousLabs/Sia/node/api"
"github.com/spf13/cobra"
Member:

were these added by mistake? There are no other changes to this file.

@@ -13,6 +13,7 @@ import (
"github.com/NebulousLabs/Sia/node/api/client"
"github.com/NebulousLabs/Sia/types"

"github.com/NebulousLabs/Sia/node/api"
Member:

were these added by mistake? There are no other changes to this file.

	recentTime = scan.Timestamp
	recentSuccess = scan.Success

	decayUptimeOrDowntime(&entry, scan, recentScan)
Member:

This is the wrong place to apply decay - you are applying decay every time that the weight is being calculated. But we can calculate the weight for a lot of different reasons. We should only be applying decay after we scan the host. That means the decay algorithm should also be moved to scan.go.

	} else {
		entry.HistoricDowntime *= decay
		entry.HistoricDowntime += timePassed * decay
	}
Member:

you need to decay both the historic uptime and the historic downtime any time that you apply a decay. The decayed time passed should be added only to the relevant field. The result should look something like this:

entry.HistoricUptime *= decay
entry.HistoricDowntime *= decay
if recentScan.Success {
    entry.HistoricUptime += timePassed*decay
} else {
    entry.HistoricDowntime += timePassed*decay
}
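A self-contained sketch of the suggested logic above (the struct and function names here are illustrative stand-ins for modules.HostDBEntry and the PR's update function):

```go
package main

import "fmt"

// entryT stands in for the uptime-related fields of modules.HostDBEntry.
type entryT struct {
	HistoricUptime   float64
	HistoricDowntime float64
}

// applyDecay decays BOTH historic fields, then credits the decayed elapsed
// time only to the field matching the most recent scan result.
func applyDecay(e *entryT, timePassed, decay float64, recentSuccess bool) {
	e.HistoricUptime *= decay
	e.HistoricDowntime *= decay
	if recentSuccess {
		e.HistoricUptime += timePassed * decay
	} else {
		e.HistoricDowntime += timePassed * decay
	}
}

func main() {
	e := entryT{HistoricUptime: 100, HistoricDowntime: 50}
	applyDecay(&e, 10, 0.5, true)
	fmt.Println(e.HistoricUptime, e.HistoricDowntime) // 55 25
}
```

The key point of the review comment is visible here: the decay multiplies both fields unconditionally, while the new elapsed time goes to only one of them.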

	} else {
		newEntry.HistoricDowntime += timePassed
	}
	decayUptimeOrDowntime(&newEntry, newEntry.ScanHistory[1], newEntry.ScanHistory[0])
Member:

Upon further reflection (sorry for waffling around), the name should really be updateUptime, because you aren't just applying a decay you are also adding new information.

@mikkeljuhl (Contributor Author):
@DavidVorick thanks for reviewing, your comments have been addressed.

@tbenz9 (Collaborator) commented May 1, 2018

@mikkeljuhl Looks like a few of the regression tests are failing. Do the tests pass when run locally?

4 participants