Fix clippy warnings #500

Open · wants to merge 3 commits into base: master
Changes from 2 commits
13 changes: 12 additions & 1 deletion backend/common/src/assign_id.rs
@@ -31,6 +31,17 @@ pub struct AssignId<Id, Details> {
_id_type: std::marker::PhantomData<Id>,
}

+impl<Id, Details> Default for AssignId<Id, Details>
+where
+    Details: Eq + Hash,
+    Id: From<usize> + Copy,
+    usize: From<Id>,
+{
+    fn default() -> Self {
+        Self::new()
+    }
+}
Comment on lines +34 to +43
Collaborator:
I guess #[derive(Default)] didn't work? I'd prefer that if possible :)

Member:
BiMap doesn't impl Default
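
For context: #[derive(Default)] expands to an impl that requires every field type to be Default, so the non-Default BiMap field rules the derive out, and clippy's new_without_default lint then asks for the hand-written impl above. A minimal sketch of the pattern (stand-in types, not the real BiMap):

    // Stand-in for bimap::BiMap, which has no Default impl.
    struct NotDefault;

    struct AssignId {
        mapping: NotDefault,
        current_id: usize,
    }

    impl AssignId {
        fn new() -> Self {
            Self { mapping: NotDefault, current_id: 0 }
        }
    }

    // `#[derive(Default)]` would demand `NotDefault: Default` and fail to
    // compile; the manual impl just delegates to `new()` instead.
    impl Default for AssignId {
        fn default() -> Self {
            Self::new()
        }
    }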


impl<Id, Details> AssignId<Id, Details>
where
Details: Eq + Hash,
@@ -68,7 +79,7 @@ where

pub fn remove_by_details(&mut self, details: &Details) -> Option<Id> {
self.mapping
-.remove_by_right(&details)
+.remove_by_right(details)
.map(|(id, _)| id.into())
}

28 changes: 14 additions & 14 deletions backend/common/src/byte_size.rs
@@ -84,20 +84,20 @@ mod test {
("20 kB", 20 * 1000),
("20K", 20 * 1000),
(" 20k", 20 * 1000),
-("1MB", 1 * 1000 * 1000),
-("1M", 1 * 1000 * 1000),
-("1m", 1 * 1000 * 1000),
-("1 m", 1 * 1000 * 1000),
-("1GB", 1 * 1000 * 1000 * 1000),
-("1G", 1 * 1000 * 1000 * 1000),
-("1g", 1 * 1000 * 1000 * 1000),
-("1KiB", 1 * 1024),
-("1Ki", 1 * 1024),
-("1MiB", 1 * 1024 * 1024),
-("1Mi", 1 * 1024 * 1024),
-("1GiB", 1 * 1024 * 1024 * 1024),
-("1Gi", 1 * 1024 * 1024 * 1024),
-(" 1 Gi ", 1 * 1024 * 1024 * 1024),
Member:
these are just silly, so much more readable before :)

perhaps we should add a clippy config for style lints like this one.

Collaborator:
I'd be up for a clippy config being added to telemetry so that we can keep more of what we like; I'm generally happy with these changes, but there isn't much value in adding Default impls, and I have a couple of other nits (though I think nits are allowed, since that's what clippy is all about!)
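
For what it's worth, the lint behind the `1 * 1000` rewrites is most likely clippy's identity_op, and a style lint like that can be allowed without giving up the rest of clippy — either crate-wide or scoped to the table. A sketch of that option (the test name here is illustrative):

    // Crate-wide, at the top of lib.rs:
    #![allow(clippy::identity_op)]

    // ...or scoped to just the table that keeps `1 * 1000` as documentation:
    #[test]
    #[allow(clippy::identity_op)]
    fn parses_human_readable_byte_sizes() {
        let cases = [
            ("1MB", 1 * 1000 * 1000),
            ("1GiB", 1 * 1024 * 1024 * 1024),
        ];
        // ... assert each case parses as expected ...
        let _ = cases;
    }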

+("1MB", 1000 * 1000),
+("1M", 1000 * 1000),
+("1m", 1000 * 1000),
+("1 m", 1000 * 1000),
+("1GB", 1000 * 1000 * 1000),
+("1G", 1000 * 1000 * 1000),
+("1g", 1000 * 1000 * 1000),
+("1KiB", 1024),
+("1Ki", 1024),
+("1MiB", 1024 * 1024),
+("1Mi", 1024 * 1024),
+("1GiB", 1024 * 1024 * 1024),
+("1Gi", 1024 * 1024 * 1024),
+(" 1 Gi ", 1024 * 1024 * 1024),
];

for (s, expected) in cases {
2 changes: 2 additions & 0 deletions backend/common/src/dense_map.rs
@@ -23,6 +23,7 @@
/// Items are keyed by an Id, which can be any type you wish, but
/// must be convertible to/from a `usize`. This promotes using a
/// custom Id type to talk about items in the map.
+#[derive(Default)]
pub struct DenseMap<Id, T> {
/// List of retired indexes that can be re-used
retired: Vec<usize>,
@@ -108,6 +109,7 @@ where
.filter_map(|(id, item)| Some((id.into(), item.as_mut()?)))
}

+#[allow(clippy::should_implement_trait)]
pub fn into_iter(self) -> impl Iterator<Item = (Id, T)> {
self.items
.into_iter()
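On the should_implement_trait allow above: the trait clippy wants is IntoIterator, but the inherent into_iter returns an unnameable impl Iterator, which can't be used as the IntoIter associated type on stable Rust without boxing. A sketch of what the trait impl would cost, assuming items is the Vec<Option<T>> the surrounding code suggests:

    impl<Id, T> IntoIterator for DenseMap<Id, T>
    where
        Id: From<usize> + 'static,
        T: 'static,
    {
        type Item = (Id, T);
        // Boxing makes the iterator type nameable, at the price of a heap
        // allocation and dynamic dispatch per `next()` call.
        type IntoIter = Box<dyn Iterator<Item = (Id, T)>>;

        fn into_iter(self) -> Self::IntoIter {
            Box::new(
                self.items
                    .into_iter()
                    .enumerate()
                    .filter_map(|(id, item)| Some((id.into(), item?))),
            )
        }
    }

Given that trade-off, the allow seems like the right call here.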
2 changes: 1 addition & 1 deletion backend/common/src/http_utils.rs
@@ -134,7 +134,7 @@ fn generate_websocket_accept_key<'a>(key: &[u8], buf: &'a mut [u8; 32]) -> &'a [
digest.update(KEY);
let d = digest.finalize();

-let n = base64::encode_config_slice(&d, base64::STANDARD, buf);
+let n = base64::encode_config_slice(d, base64::STANDARD, buf);
&buf[..n]
}

1 change: 1 addition & 0 deletions backend/common/src/multi_map_unique.rs
@@ -19,6 +19,7 @@ use std::hash::Hash;
/// A map where each key can contain multiple values. We enforce that a value
/// only ever belongs to one key at a time (the latest key it was inserted
/// against).
+#[derive(Default)]
pub struct MultiMapUnique<K, V> {
value_to_key: HashMap<V, K>,
key_to_values: HashMap<K, HashSet<V>>,
2 changes: 1 addition & 1 deletion backend/common/src/node_message.rs
@@ -134,7 +134,7 @@ mod tests {
// know whether things can (de)serialize to bincode or not at runtime without failing unless
// we test the different types we want to (de)serialize ourselves. We just need to test each
// type, not each variant.
-fn bincode_can_serialize_and_deserialize<'de, T>(item: T)
+fn bincode_can_serialize_and_deserialize<T>(item: T)
where
T: Serialize + serde::de::DeserializeOwned,
{
4 changes: 2 additions & 2 deletions backend/common/src/node_types.rs
@@ -132,7 +132,7 @@ impl Serialize for NodeIO {
}

/// Concise block details
-#[derive(Deserialize, Serialize, Debug, Clone, Copy, PartialEq)]
+#[derive(Deserialize, Serialize, Debug, Clone, Copy, PartialEq, Eq)]
pub struct Block {
pub hash: BlockHash,
pub height: BlockNumber,
@@ -208,7 +208,7 @@ impl<'de> Deserialize<'de> for NodeLocation {
}

/// Verbose block details
-#[derive(Debug, Clone, Copy, PartialEq)]
+#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub struct BlockDetails {
pub block: Block,
pub block_time: u64,
14 changes: 8 additions & 6 deletions backend/common/src/rolling_total.rs
@@ -27,11 +27,17 @@ pub struct RollingTotalBuilder<Time: TimeSource = SystemTimeSource> {
time_source: Time,
}

+impl Default for RollingTotalBuilder {
+    fn default() -> Self {
+        Self::new()
+    }
+}

impl RollingTotalBuilder {
/// Build a [`RollingTotal`] struct. By default,
/// the window_size is 10s, the granularity is 1s,
/// and system time is used.
-pub fn new() -> RollingTotalBuilder<SystemTimeSource> {
+pub fn new() -> Self {
Self {
window_size_multiple: 10,
granularity: Duration::from_secs(1),
@@ -253,11 +259,7 @@ mod test {
// Regardless of the exact time that's elapsed, we'll end up with buckets that
// are exactly granularity spacing (or multiples of) apart.
assert_eq!(
-rolling_total
-    .averages()
-    .into_iter()
-    .copied()
-    .collect::<Vec<_>>(),
+rolling_total.averages().iter().copied().collect::<Vec<_>>(),
vec![
(start_time, 1),
(start_time + granularity, 2),
4 changes: 2 additions & 2 deletions backend/common/src/ws_client/connect.rs
@@ -115,7 +115,7 @@ impl Connection {
let msg = match message_data {
soketto::Data::Binary(_) => Ok(RecvMessage::Binary(data)),
soketto::Data::Text(_) => String::from_utf8(data)
-.map(|s| RecvMessage::Text(s))
+.map(RecvMessage::Text)
.map_err(|e| e.into()),
};

@@ -249,7 +249,7 @@ pub async fn connect(uri: &http::Uri) -> Result<Connection, ConnectError> {
let socket = may_connect_tls(socket, host, scheme == "https" || scheme == "wss").await?;

// Establish a WS connection:
-let mut client = Client::new(socket.compat(), host, &path);
+let mut client = Client::new(socket.compat(), host, path);
let (ws_to_connection, ws_from_connection) = match client.handshake().await? {
ServerResponse::Accepted { .. } => client.into_builder().finish(),
ServerResponse::Redirect { status_code, .. } => {
6 changes: 5 additions & 1 deletion backend/common/src/ws_client/receiver.rs
@@ -48,7 +48,7 @@ impl Stream for Receiver {
mut self: std::pin::Pin<&mut Self>,
cx: &mut std::task::Context<'_>,
) -> std::task::Poll<Option<Self::Item>> {
-self.inner.poll_next_unpin(cx).map_err(|e| e.into())
+self.inner.poll_next_unpin(cx).map_err(|e| e)
i1i1 marked this conversation as resolved.
}
}

@@ -68,4 +68,8 @@ impl RecvMessage {
RecvMessage::Text(s) => s.len(),
}
}

+pub fn is_empty(&self) -> bool {
+    self.len() == 0
+}
}
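
The new is_empty is what clippy's len_without_is_empty asks for: a public type exposing len() should offer the companion method so callers can write the idiomatic check. A hypothetical caller for illustration:

    fn log_message_size(msg: &RecvMessage) {
        if !msg.is_empty() { // idiomatic, vs. `msg.len() > 0`
            println!("received {} bytes", msg.len());
        }
    }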
2 changes: 1 addition & 1 deletion backend/telemetry_core/benches/subscribe.rs
@@ -103,7 +103,7 @@ pub fn benchmark_subscribe_speed(c: &mut Criterion) {
let start = Instant::now();
loop {
let msgs = feeds[0].1.recv_feed_messages_once().await.unwrap();
-if msgs.iter().find(|&m| m == &finished).is_some() {
+if msgs.iter().any(|m| m == &finished) {
break;
}
}
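The find(..).is_some() → any(..) change is clippy's search_is_some lint; both short-circuit at the first match, but any returns the bool directly instead of building an Option that is immediately discarded. For example:

    let msgs = ["a", "b", "finished"];
    // Before: produces an Option<&&str> only to test it.
    assert!(msgs.iter().find(|&&m| m == "finished").is_some());
    // After: same search, intent stated directly.
    assert!(msgs.iter().any(|&m| m == "finished"));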
28 changes: 14 additions & 14 deletions backend/telemetry_core/src/aggregator/inner_loop.rs
@@ -55,13 +55,13 @@ pub enum FromShardWebsocket {
Add {
local_id: ShardNodeId,
ip: std::net::IpAddr,
-node: common::node_types::NodeDetails,
+node: Box<common::node_types::NodeDetails>,
Member:
std::mem::size_of::<NodeDetails>() == 111

I think that is small enough that we can avoid boxing this one.

Contributor (author):
Yeah, but the enum variant is around 369 bytes:

warning: large size difference between variants
  --> telemetry_core/src/aggregator/inner_loop.rs:48:1
   |
48 |   / pub enum FromShardWebsocket {
49 |   |     /// When the socket is opened, it'll send this first
50 |   |     /// so that we have a way to communicate back to it.
51 |   |     Initialize {
...    |
55 | / |     Add {
56 | | |         local_id: ShardNodeId,
57 | | |         ip: std::net::IpAddr,
58 | | |         node: common::node_types::NodeDetails,
59 | | |         genesis_hash: common::node_types::BlockHash,
60 | | |     },
   | |_|_____- the largest variant contains at least 369 bytes
61 |   |     /// Update/pass through details about a node.
62 | / |     Update {
63 | | |         local_id: ShardNodeId,
64 | | |         payload: Box<node_message::Payload>,
65 | | |     },
   | |_|_____- the second-largest variant contains at least 16 bytes
...    |
69 |   |     Disconnected,
70 |   | }
   |   |_^ the entire enum is at least 376 bytes
   |
   = note: `#[warn(clippy::large_enum_variant)]` on by default
   = help: for further information visit https://rust-lang.github.io/rust-clippy/master/index.html#large_enum_variant
help: consider boxing the large fields to reduce the total size of the enum
   |
58 |         node: Box<common::node_types::NodeDetails>,
   |               ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

genesis_hash: common::node_types::BlockHash,
},
/// Update/pass through details about a node.
Update {
local_id: ShardNodeId,
-payload: node_message::Payload,
+payload: Box<node_message::Payload>,
Member:
std::mem::size_of::<Payload>() == 344; I'm still in favour of avoiding the Box here, but I'm not sure where to put the threshold without benchmarking it ^^.

Clippy seems to use 200 as the default: https://rust-lang.github.io/rust-clippy/master/#large_enum_variant
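
The numbers are easy to reproduce with std::mem::size_of; a self-contained sketch with a stand-in payload type (sizes are for a typical 64-bit target):

    use std::mem::size_of;

    struct BigPayload([u8; 344]); // stand-in for node_message::Payload

    enum Unboxed { Update(BigPayload), Remove(u64) }
    enum Boxed { Update(Box<BigPayload>), Remove(u64) }

    fn main() {
        // An enum value is as large as its largest variant, so one big
        // variant inflates every message sitting in a channel queue.
        println!("{}", size_of::<Unboxed>()); // ~352
        println!("{}", size_of::<Boxed>());   // ~16
    }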

},
/// Tell the aggregator that a node has been removed when it disconnects.
Remove { local_id: ShardNodeId },
@@ -139,7 +139,7 @@ impl FromStr for FromFeedWebsocket {
"subscribe" => Ok(FromFeedWebsocket::Subscribe {
chain: value.parse()?,
}),
-_ => return Err(anyhow::anyhow!("Command {} not recognised", cmd)),
+_ => Err(anyhow::anyhow!("Command {} not recognised", cmd)),
}
}
}
@@ -235,16 +235,16 @@ impl InnerLoop {
// ignore node updates if we have too many messages to handle, in an attempt
// to reduce the queue length back to something reasonable, lest it get out of
// control and start consuming a load of memory.
-if metered_tx.len() > max_queue_len {
-    if matches!(
+if metered_tx.len() > max_queue_len
+    && matches!(
        msg,
        ToAggregator::FromShardWebsocket(.., FromShardWebsocket::Update { .. })
-    ) {
-        // Note: this wraps on overflow (which is probably the best
-        // behaviour for graphing it anyway)
-        dropped_messages.fetch_add(1, Ordering::Relaxed);
-        continue;
-    }
+    )
+{
+    // Note: this wraps on overflow (which is probably the best
+    // behaviour for graphing it anyway)
+    dropped_messages.fetch_add(1, Ordering::Relaxed);
+    continue;
+}

if let Err(e) = metered_tx.send(msg) {
@@ -327,7 +327,7 @@
} => {
// Conditionally modify the node's details to include the IP address.
node.ip = self.expose_node_ips.then_some(ip.to_string().into());
-match self.node_state.add_node(genesis_hash, node) {
+match self.node_state.add_node(genesis_hash, *node) {
state::AddNodeResult::ChainOnDenyList => {
if let Some(shard_conn) = self.shard_channels.get_mut(&shard_conn_id) {
let _ = shard_conn.send(ToShardWebsocket::Mute {
@@ -359,7 +359,7 @@
let mut feed_messages_for_chain = FeedMessageSerializer::new();
feed_messages_for_chain.push(feed_message::AddedNode(
node_id.get_chain_node_id().into(),
-&details.node,
+details.node,
));
self.finalize_and_broadcast_to_chain_feeds(
&genesis_hash,
@@ -411,7 +411,7 @@

let mut feed_message_serializer = FeedMessageSerializer::new();
self.node_state
-.update_node(node_id, payload, &mut feed_message_serializer);
+.update_node(node_id, *payload, &mut feed_message_serializer);

if let Some(chain) = self.node_state.get_chain_by_node_id(node_id) {
let genesis_hash = chain.genesis_hash();
1 change: 1 addition & 0 deletions backend/telemetry_core/src/aggregator/mod.rs
@@ -14,6 +14,7 @@
// You should have received a copy of the GNU General Public License
// along with this program. If not, see <https://www.gnu.org/licenses/>.

+#[allow(clippy::module_inception)]
mod aggregator;
mod aggregator_set;
mod inner_loop;
4 changes: 2 additions & 2 deletions backend/telemetry_core/src/find_location.rs
@@ -43,7 +43,7 @@ where
cache.insert(
Ipv4Addr::new(127, 0, 0, 1).into(),
Arc::new(NodeLocation {
-latitude: 52.516_6667,
+latitude: 52.516_666,
niklasad1 marked this conversation as resolved.
longitude: 13.4,
city: "Berlin".into(),
}),
@@ -103,7 +103,7 @@ impl Locator {
return cached_loc;
}

-let City { city, location, .. } = self.city.lookup(ip.into()).ok()?;
+let City { city, location, .. } = self.city.lookup(ip).ok()?;
let city = city
.as_ref()?
.names
31 changes: 17 additions & 14 deletions backend/telemetry_core/src/main.rs
@@ -279,12 +279,15 @@ where
genesis_hash,
} => FromShardWebsocket::Add {
ip,
-node,
+node: Box::new(node),
genesis_hash,
local_id,
},
internal_messages::FromShardAggregator::UpdateNode { payload, local_id } => {
-FromShardWebsocket::Update { local_id, payload }
+FromShardWebsocket::Update {
+    local_id,
+    payload: Box::new(payload),
+}
}
internal_messages::FromShardAggregator::RemoveNode { local_id } => {
FromShardWebsocket::Remove { local_id }
@@ -525,34 +528,34 @@ async fn return_prometheus_metrics(aggregator: AggregatorSet) -> Response<hyper:
use std::fmt::Write;
let mut s = String::new();
for (idx, m) in metrics.iter().enumerate() {
-let _ = write!(
+let _ = writeln!(
&mut s,
-"telemetry_core_connected_feeds{{aggregator=\"{}\"}} {} {}\n",
+"telemetry_core_connected_feeds{{aggregator=\"{}\"}} {} {}",
idx, m.connected_feeds, m.timestamp_unix_ms
);
-let _ = write!(
+let _ = writeln!(
&mut s,
-"telemetry_core_connected_nodes{{aggregator=\"{}\"}} {} {}\n",
+"telemetry_core_connected_nodes{{aggregator=\"{}\"}} {} {}",
idx, m.connected_nodes, m.timestamp_unix_ms
);
-let _ = write!(
+let _ = writeln!(
&mut s,
-"telemetry_core_connected_shards{{aggregator=\"{}\"}} {} {}\n",
+"telemetry_core_connected_shards{{aggregator=\"{}\"}} {} {}",
idx, m.connected_shards, m.timestamp_unix_ms
);
-let _ = write!(
+let _ = writeln!(
&mut s,
-"telemetry_core_chains_subscribed_to{{aggregator=\"{}\"}} {} {}\n",
+"telemetry_core_chains_subscribed_to{{aggregator=\"{}\"}} {} {}",
idx, m.chains_subscribed_to, m.timestamp_unix_ms
);
-let _ = write!(
+let _ = writeln!(
&mut s,
-"telemetry_core_subscribed_feeds{{aggregator=\"{}\"}} {} {}\n",
+"telemetry_core_subscribed_feeds{{aggregator=\"{}\"}} {} {}",
idx, m.subscribed_feeds, m.timestamp_unix_ms
);
-let _ = write!(
+let _ = writeln!(
&mut s,
-"telemetry_core_total_messages_to_feeds{{aggregator=\"{}\"}} {} {}\n",
+"telemetry_core_total_messages_to_feeds{{aggregator=\"{}\"}} {} {}",
idx, m.total_messages_to_feeds, m.timestamp_unix_ms
);
let _ = write!(
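For the record, the write! → writeln! changes above are clippy's write_with_newline lint; writeln! appends the newline itself, so the output is byte-for-byte identical. A quick check with example values:

    use std::fmt::Write;

    fn main() {
        let (mut a, mut b) = (String::new(), String::new());
        let _ = write!(&mut a, "telemetry_core_connected_feeds{{aggregator=\"{}\"}} {} {}\n", 0, 10, 1_650_000_000_000u64);
        let _ = writeln!(&mut b, "telemetry_core_connected_feeds{{aggregator=\"{}\"}} {} {}", 0, 10, 1_650_000_000_000u64);
        assert_eq!(a, b);
    }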