Typographical and Grammatical Fixes (#63)
THE-Spellchecker committed Apr 17, 2024
1 parent 0252dcf commit 7e3f484
Showing 67 changed files with 126 additions and 126 deletions.
2 changes: 1 addition & 1 deletion FAQ.md
@@ -1,5 +1,5 @@
## FAQ
-1. **how to read OracleGeneral trace,how to transfrom from csv to it? **
+1. **how to read OracleGeneral trace,how to transform from csv to it? **
The [oracleGeneral](/libCacheSim/traceReader/customizedReader/oracle/oracleGeneralBin.h) trace is a binary trace, so you cannot direct read as txt file. Each request uses the following data struct
```c
struct {
4 changes: 2 additions & 2 deletions README.md
@@ -148,7 +148,7 @@ Run the example traces with LRU eviction algorithm and 1GB cache size.
# besides using byte as the unit, you can also treat all objects having the same size, and the size is the number of objects
./bin/cachesim ../data/trace.vscsi vscsi lru 1000,16000 --ignore-obj-size 1

-# use a csv trace, note the qutation marks when you have multiple options
+# use a csv trace, note the quotation marks when you have multiple options
./bin/cachesim ../data/trace.csv csv lru 1gb -t "time-col=2, obj-id-col=5, obj-size-col=4"

# use a csv trace with more options
@@ -177,7 +177,7 @@ python3 plot_mrc_time.py --tracepath ../data/twitter_cluster52.csv --trace-forma
### Trace analysis
libCacheSim also has a trace analyzer that provides a lot of useful information about the trace.
And it is very fast, designed to work with billions of requests.
-It also coms with a set of scripts to help you analyze the trace.
+It also comes with a set of scripts to help you analyze the trace.
See [trace analysis](/doc/quickstart_traceAnalyzer.md) for more details.


4 changes: 2 additions & 2 deletions doc/quickstart_cachesim.md
@@ -158,7 +158,7 @@ You can use `-a` or `--admission` to set the admission algorithm.
cachesim supports the following prefetching algorithms: OBL, Mithril, PG (and AMP is on the way).
You can use `-p` or `--prefetch` to set the prefetching algorithm.
```bash
-# add a mithril to record object association information and fetch objests that are likely to be accessed in the future
+# add a mithril to record object association information and fetch objects that are likely to be accessed in the future
./cachesim ../data/trace.vscsi vscsi lru 1gb -p Mithril
```

@@ -167,7 +167,7 @@ You can use `-p` or `--prefetch` to set the prefetching algorithm.
# change number of threads
./cachesim ../data/trace.vscsi vscsi lru 1gb --num-thread=4

-# cap the number of requests raed from the trace
+# cap the number of requests read from the trace
./cachesim ../data/trace.vscsi vscsi lru 1gb --num-req=1000000

# change output
2 changes: 1 addition & 1 deletion doc/quickstart_traceAnalyzer.md
@@ -178,7 +178,7 @@ The first 10m requests of the Twitter cluster52 trace. The left column shows wal
# the popularity skewness ($\alpha$) is in the output of traceAnalyzer
# this plots the request count/freq over object rank
# note that measuring popularity plot does not make sense for very small traces and some block workloads
-# and note that popularity is highly affected by the the layer of the cache hierarchy
+# and note that popularity is highly affected by the layer of the cache hierarchy
python3 scripts/traceAnalysis/popularity.py ${dataname}.popularity
```

4 changes: 2 additions & 2 deletions example/cacheCluster/libketama/include/ketama.h
@@ -127,13 +127,13 @@ int ketama_compare(mcs *a, mcs *b);
* \return The resulting hash. */
unsigned int ketama_hashi(const char *const inString);

-/** \brief Hashinf function to 16 bytes char array using MD5.
+/** \brief Hashing function to 16 bytes char array using MD5.
* \param inString The string that you want to hash.
* \param md5pword The resulting hash. */
void ketama_md5_digest(const char *const inString, unsigned char md5pword[16]);

/** \brief Error method for error checking.
-* \return The latest error that occured. */
+* \return The latest error that occurred. */
char *ketama_error();

#ifdef __cplusplus /* If this is a C++ compiler, end C linkage */
8 changes: 4 additions & 4 deletions example/cacheCluster/libketama/ketama.c
@@ -449,14 +449,14 @@ ketama_create_continuum(key_t key, char *filename) {
memcpy(data + 1, &modtime, sizeof(time_t));
memcpy(data + 1 + sizeof(void *), &continuum, sizeof(mcs) * nump);

-/* We detatch here because we will re-attach in read-only
+/* We detach here because we will re-attach in read-only
* mode to actually use it. */
#ifdef SOLARIS
if ( shmdt( (char *) data ) == -1 )
#else
if (shmdt(data) == -1)
#endif
strcpy(k_error, "Error detatching from shared memory!");
strcpy(k_error, "Error detaching from shared memory!");

return 1;
}
@@ -699,14 +699,14 @@ ketama_create_continuum_libCacheSim(const key_t key,
memcpy(data + 1, &modtime, sizeof(time_t));
memcpy(data + 1 + sizeof(void *), &continuum, sizeof(mcs) * nump);

-/* We detatch here because we will re-attach in read-only
+/* We detach here because we will re-attach in read-only
* mode to actually use it. */
#ifdef SOLARIS
if ( shmdt( (char *) data ) == -1 )
#else
if (shmdt(data) == -1)
#endif
strcpy(k_error, "Error detatching from shared memory!");
strcpy(k_error, "Error detaching from shared memory!");

return 1;
}
2 changes: 1 addition & 1 deletion libCacheSim/bin/dep/oracleTraceGen/main.cpp
@@ -47,7 +47,7 @@ int main(int argc, char *argv[]) {
reader_t *reader = cli::Util::create_reader(&cli_arg);

if (oarg.oracle_type == "freq") {
-/* this is nolonger used */
+/* this is no longer used */
std::vector<uint64_t> time_window{TIME_WINDOW};
OracleFreqTraceGen trace_gen(reader, cli_arg.opath, time_window, 300);
trace_gen.run();
2 changes: 1 addition & 1 deletion libCacheSim/cache/eviction/ARC.c
@@ -6,7 +6,7 @@
// cross checked with https://github.com/trauzti/cache/blob/master/ARC.py
// one thing not clear in the paper is whether delta and p is int or float,
// we used int as first,
-// but the implemnetation above used float, so we have changed to use float
+// but the implementation above used float, so we have changed to use float
//
//
// libCacheSim
2 changes: 1 addition & 1 deletion libCacheSim/cache/eviction/ARCv0.c
@@ -6,7 +6,7 @@
// cross checked with https://github.com/trauzti/cache/blob/master/ARCv0.py
// one thing not clear in the paper is whether delta and p is int or float,
// we used int as first,
-// but the implemnetation above used float, so we have changed to use float
+// but the implementation above used float, so we have changed to use float
//
//
// libCacheSim
2 changes: 1 addition & 1 deletion libCacheSim/cache/eviction/Cacheus.c
@@ -374,7 +374,7 @@ static void update_lr(cache_t *cache, const request_t *req) {

if (delta_lr != 0) {
int sign;
-// Intuition: If hit rate is decreasing (deltla hit rate < 0)
+// Intuition: If hit rate is decreasing (delta hit rate < 0)
// Learning rate is positive (delta_lr > 0)
// sign = -1 => decrease learning rate;
if (delta_hit_rate / delta_lr > 0)
2 changes: 1 addition & 1 deletion libCacheSim/cache/eviction/Clock.c
@@ -325,7 +325,7 @@ static void Clock_parse_params(cache_t *cache,
printf("current parameters: %s\n", Clock_current_params(cache, params));
exit(0);
} else {
ERROR("%s does not have parameter %s, example paramters %s\n",
ERROR("%s does not have parameter %s, example parameters %s\n",
cache->cache_name, key, Clock_current_params(cache, params));
exit(1);
}
4 changes: 2 additions & 2 deletions libCacheSim/cache/eviction/GLCache/GLCacheInternal.h
@@ -10,7 +10,7 @@ typedef float pred_t;
typedef float train_y_t;

typedef enum {
-LOGCACHE_TWO_ORACLE = 1, // oralce to select group and object
+LOGCACHE_TWO_ORACLE = 1, // oracle to select group and object
LOGCACHE_LOG_ORACLE = 2, // oracle to select group, obj_score to select obj
LOGCACHE_ITEM_ORACLE = 3, // FIFO for seg selection, oracle for obj selection
LOGCACHE_LEARNED = 4,
@@ -174,7 +174,7 @@ typedef struct seg_sel {
/* parameters and state related to cache */
typedef struct {
/* user parameters */
-int segment_size; /* in temrs of number of objects */
+int segment_size; /* in terms of number of objects */
int n_merge;
// whether we merge consecutive segments (with the first segment has the
// lowest utility) or we merge non-consecutive segments based on ranking
2 changes: 1 addition & 1 deletion libCacheSim/cache/eviction/GLCache/dataPrep.c
@@ -38,7 +38,7 @@ static void dump_training_data(cache_t *cache) {

/* currently we snapshot segments after each training, then we collect segment
* utility for the snapshotted segments over time, when it is time to retrain,
-* we used the snapshotted segment featuers and calculated utility to train a
+* we used the snapshotted segment features and calculated utility to train a
* model, Because the snapshotted segments may be evicted over time, we move
* evicted segments to training buckets and keep ghost entries of evicted
* objects so that we can more accurately calculate utility. Because we keep
2 changes: 1 addition & 1 deletion libCacheSim/cache/eviction/GLCache/eviction.c
@@ -138,7 +138,7 @@ void GLCache_merge_segs(cache_t *cache, bucket_t *bucket, segment_t **segs) {
}

// called when there is no segment can be merged due to fragmentation
-// different from clean_one_seg becausee this function also updates cache state
+// different from clean_one_seg because this function also updates cache state
int evict_one_seg(cache_t *cache, segment_t *seg) {
VVERBOSE("req %lu, evict one seg id %d occupied size %lu/%lu\n", cache->n_req,
seg->seg_id, cache->occupied_byte, cache->cache_size);
2 changes: 1 addition & 1 deletion libCacheSim/cache/eviction/GLCache/obj.h
@@ -34,7 +34,7 @@ static inline void obj_hit_update(GLCache_params_t *params, cache_obj_t *obj,
bucket_t *bkt = &params->buckets[seg->bucket_id];
}

-/* some internal state update bwhen an object is evicted */
+/* some internal state update when an object is evicted */
static inline void obj_evict_update(cache_t *cache, cache_obj_t *obj) {
GLCache_params_t *params = cache->eviction_params;
segment_t *seg = obj->GLCache.segment;
4 changes: 2 additions & 2 deletions libCacheSim/cache/eviction/GLCache/segSel.c
@@ -24,7 +24,7 @@ static inline int cmp_seg(const void *p1, const void *p2) {

// check whether there are min_evictable segments to evict
// consecutive indicates the merge eviction uses consecutive segments in the
-// chain if not, it usees segments in ranked order and may not be consecutive
+// chain if not, it uses segments in ranked order and may not be consecutive
static bool is_seg_evictable(segment_t *seg, int min_evictable,
bool consecutive) {
if (seg == NULL) return false;
@@ -184,7 +184,7 @@ void rank_segs(cache_t *cache) {

/* when cache size is small and space is fragmented between buckets,
* it is possible that there is no bucket with n_merge + 1 segments,
-* in which case, we randonly pick one and uses eviction w/o merge */
+* in which case, we randomly pick one and uses eviction w/o merge */
static bucket_t *select_one_seg_to_evict(cache_t *cache, segment_t **segs) {
GLCache_params_t *params = cache->eviction_params;
bucket_t *bucket = NULL;
2 changes: 1 addition & 1 deletion libCacheSim/cache/eviction/GLCache/segment.c
@@ -120,7 +120,7 @@ segment_t *allocate_new_seg(cache_t *cache, int bucket_id) {
return new_seg;
}

-/* link a segment before antoher segment, this is used to place the merged
+/* link a segment before another segment, this is used to place the merged
* segment in the same position as old segments */
void link_new_seg_before_seg(GLCache_params_t *params, bucket_t *bucket,
segment_t *old_seg, segment_t *new_seg) {
2 changes: 1 addition & 1 deletion libCacheSim/cache/eviction/LHD/LHD_Interface.cpp
@@ -210,7 +210,7 @@ static cache_obj_t *LHD_insert(cache_t *cache, const request_t *req) {
/**
* @brief find an eviction candidate, but do not evict from the cache,
* and do not update the cache metadata
-* note that eviction must evicts this object, so if we implment this function
+* note that eviction must evicts this object, so if we implement this function
* and it uses random number, we must make sure that the same object is evicted
* when we call evict
*
2 changes: 1 addition & 1 deletion libCacheSim/cache/eviction/LHD/lhd.cpp
@@ -24,7 +24,7 @@ LHD::LHD(int _associativity, int _admissions, cache_t* _cache)
}

// Initialize policy to ~GDSF by default.
-// jason: why is this GDSF? and why the index of class is used in densit
+// jason: why is this GDSF? and why the index of class is used in density
for (uint32_t c = 0; c < NUM_CLASSES; c++) {
for (age_t a = 0; a < MAX_AGE; a++) {
classes[c].hitDensities[a] = 1. * (c + 1) / (a + 1);
2 changes: 1 addition & 1 deletion libCacheSim/cache/eviction/LHD/lhd.hpp
@@ -115,7 +115,7 @@ class LHD {
rank_t ewmaNumObjectsMass = 0.;

// how many objects had age > max age (this should almost never
-// happen -- if you observe non-neglible overflows, something has
+// happen -- if you observe non-negligible overflows, something has
// gone wrong with the age coarsening!!!)
uint64_t overflows = 0;

6 changes: 3 additions & 3 deletions libCacheSim/cache/eviction/LIRS.c
@@ -306,7 +306,7 @@ static cache_obj_t *LIRS_insert(cache_t *cache, const request_t *req) {
params->hirs_count += inserted_obj_s->obj_size;
cache->occupied_byte += inserted_obj_s->obj_size + cache->obj_md_size;
} else {
-// the curcumstance is same as accessing an HIR non-resident not in S
+// the circumstance is same as accessing an HIR non-resident not in S
// insert the req into S and place it on the top of S
cache_obj_t *inserted_obj_s = params->LRU_s->insert(params->LRU_s, req);
inserted_obj_s->LIRS.is_LIR = false;
@@ -378,7 +378,7 @@ static void LIRS_evict(cache_t *cache, const request_t *req) {
if (params->lirs_count + req->obj_size > params->lirs_limit &&
params->hirs_count + req->obj_size > params->hirs_limit) {
// when both LIR and HIR block sets are full,
-// the curcumstance is same as accessing an HIR non-resident not in S
+// the circumstance is same as accessing an HIR non-resident not in S
// remove the HIR resident at the front of Q
evictHIR(cache);
}
@@ -491,7 +491,7 @@ bool LIRS_can_insert(cache_t *cache, const request_t *req) {
if (params->lirs_count + req->obj_size > params->lirs_limit &&
params->hirs_count + req->obj_size > params->hirs_limit) {
// when both LIR and HIR block sets are full,
-// the curcumstance is same as accessing an HIR non-resident not in S
+// the circumstance is same as accessing an HIR non-resident not in S
while (params->hirs_count + req->obj_size > params->hirs_limit) {
evictHIR(cache);
}
2 changes: 1 addition & 1 deletion libCacheSim/cache/eviction/LRB/LRB_Interface.cpp
@@ -220,7 +220,7 @@ static cache_obj_t *LRB_insert(cache_t *cache, const request_t *req) {
/**
* @brief find an eviction candidate, but do not evict from the cache,
* and do not update the cache metadata
-* note that eviction must evicts this object, so if we implment this function
+* note that eviction must evicts this object, so if we implement this function
* and it uses random number, we must make sure that the same object is evicted
* when we call evict
*
2 changes: 1 addition & 1 deletion libCacheSim/cache/eviction/LRB/cache.h
@@ -75,7 +75,7 @@ namespace webcachesim {
static std::unique_ptr<Cache> create_unique(std::string name) {
std::unique_ptr<Cache> Cache_instance;
if (get_factory_instance().count(name) != 1) {
std::cerr << "unkown cacheType" << std::endl;
std::cerr << "unknown cacheType" << std::endl;
return nullptr;
}
Cache_instance = move(get_factory_instance()[name]->create_unique());
2 changes: 1 addition & 1 deletion libCacheSim/cache/eviction/LRB/lrb.h
@@ -299,7 +299,7 @@ class LRBCache : public Cache {
BoosterHandle booster = nullptr;

unordered_map<string, string> training_params = {
-//don't use alias here. C api may not recongize
+//don't use alias here. C api may not recognize
{"boosting", "gbdt"},
{"objective", "regression"},
{"num_iterations", "32"},
2 changes: 1 addition & 1 deletion libCacheSim/cache/eviction/LRUv0.c
@@ -184,7 +184,7 @@ static cache_obj_t *LRUv0_insert(cache_t *cache, const request_t *req) {
cache->occupied_byte += req->obj_size + cache->obj_md_size;
cache_obj_t *cache_obj = create_cache_obj_from_request(req);
// TODO: use SList should be more memory efficient than queue which uses
-// doubly-linklist under the hood
+// doubly-linkedlist under the hood
GList *node = g_list_alloc();
node->data = cache_obj;
g_queue_push_tail_link(LRUv0_params->list, node);
2 changes: 1 addition & 1 deletion libCacheSim/cache/eviction/QDLP.c
@@ -1,5 +1,5 @@
//
-// Quick demotion + lazy promoition v1
+// Quick demotion + lazy promotion v1
//
// 20% FIFO + ARC
// insert to ARC when evicting from FIFO
2 changes: 1 addition & 1 deletion libCacheSim/cache/eviction/S3FIFOd.c
@@ -1,5 +1,5 @@
//
-// Quick demotion + lazy promoition v2
+// Quick demotion + lazy promotion v2
//
// FIFO + Clock
// the ratio of FIFO is decided dynamically
2 changes: 1 addition & 1 deletion libCacheSim/cache/eviction/TwoQ.c
@@ -1,5 +1,5 @@
//
-// Quick demotion + lazy promoition v1
+// Quick demotion + lazy promotion v1
//
// 20% Ain + ARC
// insert to ARC when evicting from Ain
2 changes: 1 addition & 1 deletion libCacheSim/cache/eviction/fifo/LP_TwoQ.c
@@ -1,5 +1,5 @@
//
-// Quick demotion + lazy promoition v1
+// Quick demotion + lazy promotion v1
//
// 20% Ain + ARC
// insert to ARC when evicting from Ain
2 changes: 1 addition & 1 deletion libCacheSim/cache/eviction/priv/MClock.c
@@ -3,7 +3,7 @@
// there are m hands, each points to i/m position of the clock
// there is no action on hit,
// on miss, one of the hands is selected to evict based on next access distance
-// (Belady) then the hands are reset to correponding positions
+// (Belady) then the hands are reset to corresponding positions
//
//
// mClock.c
2 changes: 1 addition & 1 deletion libCacheSim/cache/eviction/unsupported/AMP.c
@@ -236,7 +236,7 @@ void _AMP_evict(cache_t *AMP, request_t *cp) {
void *_AMP_evict_with_return(cache_t *AMP, request_t *cp) {
/** return a pointer points to the data that being evicted,
* it can be a pointer pointing to gint64 or a pointer pointing to char*
-* it is the user's responsbility to g_free the pointer
+* it is the user's responsibility to g_free the pointer
**/

struct AMP_params *AMP_params = (struct AMP_params *)(AMP->cache_params);
6 changes: 3 additions & 3 deletions libCacheSim/cache/eviction/unsupported/Mithril.c
@@ -182,7 +182,7 @@ static inline void _Mithril_record_entry(cache_t *Mithril, request_t *cp) {
/* if it does not record at each request, check whether it is hit or miss */
if (Mithril_params->cache->check(Mithril_params->cache, cp)) return;

-/* check it is sequtial or not */
+/* check it is sequential or not */
if (Mithril_params->sequential_type && _Mithril_check_sequential(Mithril, cp))
return;

@@ -217,7 +217,7 @@ static inline void _Mithril_record_entry(cache_t *Mithril, request_t *cp) {
g_hash_table_insert(rmtable->hashtable, str,
GINT_TO_POINTER(rmtable->rtable_cur_row));
} else {
ERROR("unsupport data obj_id_type in _Mithril_record_entry\n");
ERROR("unsupported data obj_id_type in _Mithril_record_entry\n");
}

row_in_rtable[1] = ADD_TS(row_in_rtable[1], Mithril_params->ts);
@@ -328,7 +328,7 @@ static inline void _Mithril_record_entry(cache_t *Mithril, request_t *cp) {
ERROR("removing from rmtable failed for mining table entry\n");

/** for dataType c, now the pointer to string has been freed,
-* so mining table entry is incorrent,
+* so mining table entry is incorrect,
* but mining table entry will be deleted, so it is OK
*/

