
Project dependencies may have API risk issues #76

Open

PyDeps opened this issue Oct 27, 2022 · 0 comments
Hi, in neuron_poker, inappropriate dependency version constraints can introduce risks.

Below are the dependencies and version constraints that the project is currently using:

pandas
pytest
pylint
pydocstyle
gym
numpy
matplotlib
pyglet
keras-rl2
docopt
tensorflow==2.3.2
tensorboard
pybind11
cppimport

The `==` constraint introduces a risk of dependency conflicts because it pins the dependency too strictly.
Constraints with no upper bound (or `*`) introduce a risk of missing-API errors, because the latest version of a dependency may remove APIs the project calls.
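To illustrate why an upper bound matters, here is a minimal sketch (not part of neuron_poker) of checking whether a version falls inside a bounded range such as `>=0.7.0,<=1.4.2`. It compares numeric version tuples and ignores pre-release tags for simplicity; a real tool would use `packaging.specifiers.SpecifierSet` instead.

```python
def parse(version):
    """Turn '1.4.2' into (1, 4, 2), dropping non-numeric parts (e.g. 'rc3')."""
    return tuple(int(part) for part in version.split(".") if part.isdigit())

def in_range(version, low, high):
    """True if low <= version <= high under numeric tuple comparison."""
    return parse(low) <= parse(version) <= parse(high)

# Using the pandas range suggested in this issue: >=0.7.0,<=1.4.2
print(in_range("1.4.2", "0.7.0", "1.4.2"))   # True: at the upper bound
print(in_range("1.5.0", "0.7.0", "1.4.2"))   # False: exceeds the bound
```

Without the upper bound, `in_range` would accept `1.5.0` even if that release had removed an API the project depends on.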

After further analysis, the following constraints could be used in this project:
The version constraint of dependency pandas can be changed to >=0.7.0,<=1.4.2.
The version constraint of dependency numpy can be changed to >=1.5.0,<=1.23.0rc3.
The version constraint of dependency pyglet can be changed to >=1.3.0rc2,<=1.4.11.

These suggested modifications minimize dependency conflicts while allowing the latest compatible versions, without triggering call errors in the project.
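Applied to a requirements file, the suggested bounds would look something like the following (a sketch based only on the ranges above; the exact pins should be verified against the project's test suite before adopting):

```
pandas>=0.7.0,<=1.4.2
numpy>=1.5.0,<=1.23.0rc3
pyglet>=1.3.0rc2,<=1.4.11
```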

The project invokes all of the following methods.

Methods called from pandas:
pandas.concat
Methods called from numpy:
distutils.core.setup
Methods called from pyglet:
pyglet.graphics.vertex_list
pyglet.clock.tick
datetime.date.today
time.strftime
pyglet.graphics.draw
All other methods called:
ValueError
self.player_cycle.deactivate_current
agents.agent_keras_rl_dqn.Player.train
collections.Counter
float
self.player_cycle.next_player
self._distribute_cards
numpy.ceil
arr3.arr2.arr1.np.concatenate.flatten
logging.getLogger.setLevel
gym.envs.registration.register
pyglet.graphics.vertex_list
MonteCarlo.get_two_short_notation
type
self.player_cycle.next_dealer
tools.hand_evaluator.SUITS_ORIGINAL.find
pyglet.clock.tick
info.keys
numpy.maximum
cppimport.imp
self.player_data.__dict__.values
self.get_multiplecards
self._save_funds_history
Memoise
matplotlib.pyplot.show
numpy.random.seed
numpy.mean
logging.getLogger.error
numpy.array
join
this_player_action_space.intersection
format
numpy.sum
time.time
self.distribute_cards
pandas.Series
logging.getLogger.removeHandler
gym.make.add_player
SelfPlay.random_agents
player_alive.append
preflop_state.get_reverse_sheetname
logging.StreamHandler.setFormatter
MonteCarlo
self._get_legal_moves
ui_action_and_signals.signal_progressbar_increase.emit
gym_env.rendering.PygletWindow
self.dqn.compile
winner_in_episodes.pd.Series.value_counts
traceback.format_exception
self._get_environment
os.path.isfile
self.community_data.__dict__.values
logging.getLogger.removeFilter
tensorflow.compat.v1.disable_eager_execution
max
self.get_four_of_a_kind
self._end_round
self._create_card_deck
self._end_hand
self.winnerCardTypeList.items
pyglet.graphics.vertex_list.draw
self.config.read
deck.index
self._award_winner
multiprocessing.pool.ThreadPool.map
self._close_round
flatten
get_multiprocessing_config
logging.getLogger.warning
self.preflop_equities.items
Action
hasattr
player.cards.append
os.getenv
self.get_straightflush
eval_best_hand
self.player_cycle.deactivate_player
numpy.squeeze
agents.agent_keras_rl_dqn.Player.initiate_agent
get_config
SelfPlay.dqn_train_keras_rl
numpy.argsort
datetime.date.today
pickle.dumps
self.get_two_short_notation
pool_fn
self.model.add
numpy.diff
numpy.sort
self.viewer.text
self.legal_moves.append
pyglet.text.Label
pyglet.gl.glClear
gym.spaces.Discrete
agents.agent_custom_q1.Player
self._calculate_reward
preflop_state.get_rangecards_from_sheetname
pyglet.graphics.draw.draw
cls.Singleton.super.__call__
os.path.abspath
self._process_decision
self._next_dealer
self.callers.append
datetime.date.today.strftime
self.model.load_weights
os.environ.get
self.viewer.circle
self.get_highcard
hand._.r.r.hand.join.count.r.CARD_RANKS_ORIGINAL.find.items
setuptools.find_packages
self.player_cycle.mark_checker
PlayerShell
Exception
numpy.all
str
self._game_over
copy.copy.append
super
compiled_args.append
numpy.argmax
numpy.random.randint
kwargs.items
self._check_game_over
t.PlayerCardList_and_others.append
self.current_player.agent_obj.action
logging.Formatter
numpy.round
self.table_cards.append
PlayerCycle
logging.handlers.RotatingFileHandler.setLevel
self.deck.append
CustomProcessor
int
tensorflow.keras.models.model_from_json
_calc_score
self.raisers.append
numpy.logical_and
self.player_cycle.mark_out_of_cash_but_contributed
random.choice
agents.agent_keras_rl_dqn.TrumpPolicy
os.path.dirname
Evaluation.run_evaluation
SelfPlay.key_press_agents
StageData
sum
self.calc_score
self.func
numpy.tile
PygletWindow.update
self.card_to_num
numpy.minimum
SelfPlay
self.viewer.reset
json.dump
highCardsVal.sum.sum
self.player_cycle.mark_folder
isinstance
self._agent_is_autoplay
deck.pop
numpy.stack
player_cards_with_table_cards.index
agents.agent_keypress.Player
self.player_cycle.update_alive
tools.helper.init_logger
print
hand.join.count
self.player_cycle.new_round_reset
logging.getLogger.info
self.current_player.actions.append
numpy.delete
get_config.get
self.create_card_deck.index
self.render
pyglet.window.Window
cards_combined_array.append
multiprocessing.pool.ThreadPool.starmap
self.step
multiprocessing.cpu_count
self.display_surface.flip
os.path.realpath
tensorflow.keras.layers.Dense
rl.memory.SequentialMemory
self.create_card_deck
logging.handlers.RotatingFileHandler.setFormatter
open
configparser.ConfigParser
input
multiplier.self.sorted_5flushcards.sum
os.path.join
self.dqn.test
pyglet.text.Label.draw
filename.replace.replace
self.counts.sum
numpy.cos
table_card_list.append
winner_in_episodes.append
copy.copy.pop
agents.agent_keras_rl_dqn.Player
self.env.action_space.sample
math.cos
multiprocessing.pool.ThreadPool
known_player.append
SelfPlay.dqn_play_keras_rl
self.player_cycle.mark_raiser
CARD_RANKS_ORIGINAL.find
math.radians
self.get_opponent_allowed_cards_list
self._start_new_hand
self.deck.pop
tuple
player_cards.append
list
self.env.reset
self.distribute_cards_to_table
PlayerFinalCardsWithTableCards.index
sorted
tensorflow.keras.layers.Dropout
copy.copy
self._next_player
CustomConfigParser
self.get_counts
gym_env.env.Action
self.viewer.rectangle
math.sin
numpy.concatenate
PygletWindow
agents.agent_consider_equity.Player
configparser.ExtendedInterpolation
self._distribute_cards_to_table
set
numpy.logical_or
self.get_three_of_a_kind
logging.getLogger.debug
self.cards.np.arange.sum
numpy.any
self.dqn.fit
round
MonteCarlo.run_montecarlo
docopt.docopt
self._get_winner
distutils.core.setup
suits.count
self.set_args
strided
pyglet.graphics.draw
numpy.logical_not
CardsOnTable.append
self.display_surface.switch_to
self.player_cycle.get_potential_winners
tools.helper.flatten
self.update_alive
PlayerData
self.current_player.temp_stack.append
numpy.insert
numpy.random.choice
numpy.exp
logging.StreamHandler
self.distribute_cards_to_players
sd.__dict__.values
docopt.docopt.upper
SelfPlay.equity_vs_random
command_line_parser
logging.getLogger.addHandler
zip
pandas.DataFrame
random_player.append
PygletWindow.circle
setuptools.setup
CommunityData
self.winner_in_episodes.pd.Series.value_counts
tools.hand_evaluator.eval_best_hand
numpy.zeros
self.new_hand_reset
self.funds_history.reset_index
tensorflow.keras.callbacks.TensorBoard
self.get_kickers
winnerCardTypeList.append
self.funds_history.reset_index.plot
TrumpPolicy
get_dir
t.PlayerCardList.append
min
logging.handlers.RotatingFileHandler
tools.hand_evaluator.get_winner
Cython.Build.cythonize
pandas.concat
self.get_straight
ui_action_and_signals.signal_status.emit
tensorflow.keras.optimizers.Adam
get_config.getboolean
table_cards_numeric.append
flush_hand._.r.r.flush_hand.join.count.r.CARD_RANKS_ORIGINAL.find.items
rl.agents.DQNAgent
self._clean_up_pots
json.load
numpy.append
operator.itemgetter
self.create_card_deck.pop
player_cards_with_table_cards.append
readme.read
numpy.random.random
len
self.players.append
tensorflow.keras.models.Sequential
tools.hand_evaluator.CARD_RANKS_ORIGINAL.find
tableCardList.append
numpy.amax
time.strftime
L.get_collusion_cards
self.deactivate_current
numpy.sin
agents.agent_random.Player
self.load
self.get_pair_score
enumerate
numpy.isin
self.stage_data.sd.sd.__dict__.values.flatten.list.np.array.flatten
MyWinnerMask.Winners.all
self.log.info
self.get_flush
tools.helper.get_config
self.player_cycle.mark_bb
PygletWindow.reset
gym_env.env.PlayerShell
agents.agent_keras_rl_dqn.Player.play
self.player_status.append
self.display_surface.dispatch_events
os.path.relpath
self.env.add_player
self._initiate_round
self.get_fullhouse
PlayerFinalCardsWithTableCards.append
gym.make
SelfPlay.equity_self_improvement
all_players.append
range
self.reset
numpy.arange.self.suits.sum
self._illegal_move
self.model.to_json
self.player_cycle.new_hand_reset
numpy.clip
getattr
pyglet.gl.glColor4f
numpy.arange
get_config.getint
logging.StreamHandler.setLevel
_keys_to_tuple
self.dqn.save_weights
gym.make.seed
self.winner_in_episodes.append
flush_hand.join.count
PygletWindow.text
logging.getLogger
Evaluation
self.viewer.update
self.get_equity
self.get_two_pair_score
q_values.astype.astype
RuntimeError
self._execute_step
gym.make.reset

@developer
Could you please help me check this issue?
May I open a pull request to fix it?
Thank you very much.
