
Fixed NullPointerException while creating temporary function #1384

Open
wants to merge 2,911 commits into base: master

Conversation

ashishkshukla

Changes proposed in this pull request

This pull request fixes a NullPointerException thrown while creating temporary functions.
See issue #1383 for details.
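
A minimal reproduction sketch for context (the session variable, UDF class name, and jar path are illustrative assumptions; the CREATE FUNCTION syntax follows SnappyData's documented form and may differ from the exact failing case in #1383):

```scala
import org.apache.spark.sql.SnappySession

// Illustrative only: `sc` is an existing SparkContext; the UDF class and
// jar path are hypothetical placeholders.
val snappy = new SnappySession(sc)
// Creating a temporary function along these lines hit the NPE this PR fixes:
snappy.sql("CREATE TEMPORARY FUNCTION strLen AS 'com.example.StringLength' " +
  "RETURNS INTEGER USING JAR '/path/to/udf.jar'")
```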

Patch testing

manual testing

ReleaseNotes.txt changes

NA

Other PRs

NA

hemanthmeka and others added 30 commits October 30, 2018 17:08
* adding commandline arg to kill on OOM

* checking if the system is 64-bit to load the right libgemfirexd<32/64>.so

* adding agent in case of mac/linux

* enable copy in case of mac also

* implementing review suggestions

* checking if agent load would be successful else continue without agent

* implementing review suggestions

* undoing wrong changes to gradle.properties

* syncing store
Reset the pool at the end of collect to avoid spillover of the lowlatency pool setting to later
operations that may not use the CachedDataFrame execution paths.
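
The intent resembles the following pattern (a sketch only; the variable names `sc` and `df` and the pool name are assumptions, not the actual CachedDataFrame code):

```scala
// Run a query in the low-latency scheduler pool, then restore the previous
// pool so later operations are not affected.
val previousPool = sc.getLocalProperty("spark.scheduler.pool")
sc.setLocalProperty("spark.scheduler.pool", "lowlatency")
try {
  df.collect()
} finally {
  // reset at the end of collect so later operations that bypass the
  // CachedDataFrame paths do not inherit the low-latency pool
  sc.setLocalProperty("spark.scheduler.pool", previousPool)
}
```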
For cases like MacOSX, which ships with bash 3.x by default
…directory structure in order to avoid confusion between the PID of the hydra JVM and the PID of the snappy node

- Correcting table names in the configuration file required for dmlOps tests

Testing Done: Verified the changes by running a few tests from HA and non-HA bts
- avoid unnecessary re-evaluation of cluster properties target
- redirect error of spark-shell in testSparkShell to output string for checks too
- update store and spark links
- remove distZip from default assemble target
…#1194)

* ignoring failing tests

* removing wrongly created test suite

* formatting changes
…the server.

Because of the previous order, there was a chance that this function came for
execution on this node even before it was registered.
…ere was an issue with running a snappy job with the --packages option outside the snappydata build directory.
…#1195)

Key-based aggregations (GROUP BY) already handle copying of the incoming value,
but this was missing in non-key flat aggregations.

## Changes proposed in this pull request

- refactored value copy code in ObjectHashMapAccessor and string clone code (only if required)
- use above for non-key aggregations too
- renamed io.snappydata.implicits to io.snappydata.sql.implicits
- handle null values during clone/copy of non-primitive aggregate results
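
The need for the copy can be illustrated as follows (an illustrative sketch of Spark's buffer-reuse hazard, not the generated code from this change):

```scala
import org.apache.spark.sql.catalyst.InternalRow
import org.apache.spark.unsafe.types.UTF8String

// Spark may reuse the same underlying row buffer across iterator steps, so a
// flat (non-GROUP BY) aggregate that retains a non-primitive value must copy
// it out; otherwise the held reference is silently overwritten.
def maxName(iter: Iterator[InternalRow]): UTF8String = {
  var max: UTF8String = null
  while (iter.hasNext) {
    val row = iter.next()
    if (!row.isNullAt(0)) {                 // handle nulls during copy
      val name = row.getUTF8String(0)
      if (max == null || name.compareTo(max) > 0) {
        max = name.clone()                  // copy out of the reused buffer
      }
    }
  }
  max
}
```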
Also corrected the copyright headers to be SnappyData ones.
Changed the year from 2017 to 2018 in license headers.
…alCatalog

- for paths from SnappyExternalCatalog (e.g. "CREATE VIEW"), the catalog cache needs
  to be cleared as well; added a unit test for this
- fixing some occasional failures due to test issues
- renamed SnappyTableStatsProviderService.suspendCacheInvalidation to TEST_SUSPEND_CACHE_INVALIDATION
  to indicate clearly it is meant only for tests
When using SnappySession (the default), the temporary hive configuration passed in
is not just used by HiveServer2 but also overrides the internal hive
configuration used by SnappyStoreHiveCatalog, causing problems.

Now the SnappySession hive configuration is used after adding the "hive.server2"
configuration read from the temporary "executionHive" client (which in turn
sets itself up using hive-site.xml etc. that are ignored by SnappySession's
hive meta-store client).

Also reduced logging during the temporary hive client initialization.
- new "snappydata.sql.hiveCompatible" to turn some SQL output to be more hive compatible;
  currently this includes "show tables ..." variants that have only one "name" column in output
- added unit tests for above property and "show tables ..." variants
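
A usage sketch for the new property (assuming a SnappySession named `snappy`, a boolean value for the property, and an illustrative schema name):

```scala
// With the property enabled, "show tables ..." variants emit a single
// hive-style "name" column instead of the default multi-column output.
snappy.sql("set snappydata.sql.hiveCompatible=true")
snappy.sql("show tables in app").show()
```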
.jvmkill*.log files. The .hprof dump can be pulled by passing the option '-m'
or '--hprofdump' to the script.
Hot Fix Changes
Updates to backlog doc items
New Spark Extension API Guide
The jar published for snappydata-jdbc is the shadow one that includes all dependencies
dshirish and others added 13 commits July 26, 2019 20:28
* Code changes for SNAP-2779 and SNAP-1338:
 - Adding a Redundancy column in the Tables List to view the count of redundant copies.
 - Adding a Redundancy Status column in the Tables List to monitor whether redundancy is satisfied or broken.
 - Changes for maintaining Redundancy and isRedundancyImpaired details for the count of redundant copies and the redundancy satisfied/broken status.
 - Display Redundancy as 'NA' if the distribution type is REPLICATE.
 - Display the buckets count in red colour if any of the buckets is offline.
* changes to tackle insufficient disk space issue in transaction

* fixed the test failure. Apparently, in some cases the table name present in ColumnarStore is in lower case, causing a region-not-found exception. The fix is to upper-case the table name. Not yet debugged why the table name comes in lower case for some partitions.
* Take a region lock on bulk write ops on a column table. In the smart connector case, use a connection to execute a procedure on a server to take the lock, and release the lock using the same connection when the operation is over.
* added removeTableUnsafeIfExists to drop a catalog table in an inconsistent state

* adding test for DROP_CATALOG_TABLE_UNSAFE procedure

* worked on review comments

* review comment changes

* enhancements to REMOVE_METASTORE_ENTRY

* fixing test for SNAP-3055

* review changes incorporated

* review changes

* removing unnecessary handling of exception
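
A hedged invocation sketch for the repair path above (the procedure names appear in these commits, but the argument lists below are assumptions made for illustration; check the actual procedure definitions before use):

```scala
// WARNING: illustrative only; argument shapes are assumed, not confirmed.
// Drop a catalog table stuck in an inconsistent state:
snappy.sql("CALL SYS.DROP_CATALOG_TABLE_UNSAFE('APP.BROKEN_TABLE')")
// Remove a stale metastore entry for a table:
snappy.sql("CALL SYS.REMOVE_METASTORE_ENTRY('APP.BROKEN_TABLE')")
```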

* Added automated test for ODBC FailOver functionality
* Mask credentials (in case of s3 URI) in Describe extended/formatted output.
* Mask credentials in case of s3 on the UI for external tables.
* Disallow non-admin user access to the tables in SNAPPY_HIVE_METASTORE.
This adds support for the following components of Spark's hive session:

1) catalog that reads from an external hive meta-store using an extra hive-enabled SparkSession
2) HiveSessionState from the hive-enabled SparkSession that adds additional resolution rules
   and strategies for such hive-managed tables
3) Parser changes to delegate to the Spark Parser for Hive DDL extensions. A special format,
   "CREATE TABLE ... USING hive", is allowed that explicitly specifies that the table use the hive provider.

There are two user-level properties:

- Standard "spark.sql.catalogImplementation" that will consult external hive metastore in addition
  to the builtin catalog when the value is set to "hive". Note that first builtin catalog is used and then
  the external one, so in case of name clashes, the builtin one is given preference. For writes,
  all tables using "hive" as the provider will use the external hive metastore while rest use builtin.
- "snappydata.sql.hiveCompatibility" can be set to default/spark/full. When set to "spark" or "full"
  then the default behaviour of "create table ..." without any USING provider and any Hive DDL
  extensions will change to create a hive table instead of a row table.
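
A combined usage sketch for the two properties (assuming a SnappySession named `snappy`, that both properties can be set per-session via SQL as described, and illustrative table definitions):

```scala
// Consult the external hive metastore in addition to the builtin catalog;
// on name clashes the builtin catalog wins for reads.
snappy.sql("set spark.sql.catalogImplementation=hive")

// With "spark" (or "full"), a plain CREATE TABLE without a USING clause
// creates a hive table instead of a row table.
snappy.sql("set snappydata.sql.hiveCompatibility=spark")
snappy.sql("CREATE TABLE events (id INT, msg STRING)")

// Explicitly target the hive provider regardless of the compatibility level:
snappy.sql("CREATE TABLE logs (line STRING) USING hive")
```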

A lazily instantiated instance of the Hive-enabled SparkSession is kept inside SnappySessionState,
which is referred to if "spark.sql.catalogImplementation" is "hive" for the session.

For 1), the list/get/create methods in SnappySessionCatalog have been overridden to read/write to
the hive catalog after the snappy catalog if hive support is enabled on the session.

For 2), wrapper Rule/Strategy classes have been added that wrap the extra rules/strategies from
hive session and run them only if the property has been enabled on SnappySession.

The code temporarily switches to the hive-enabled SparkSession when running hive
rules/strategies, some of which expect the internal sharedState/sessionState to be those of hive.

Honour spark.sql.sources.default for the default data source: if spark.sql.sources.default is explicitly
set then use the same in the SQL parser, with the default remaining 'row' as before

Initial code for porting hive suite

Fix for SNAP-3100: make the behaviour of "drop schema" and "drop database" identical, dropping from
both the builtin and external catalogs, since "create schema" is identical to "create database"

Fixes for schema/database handling and improved help messages

Improved CommandLineToolsSuite to not print failed output to screen
* Added code changes for SNAP-2772

* Added code changes for undeploying packages/jars from the server side.