Re^6: Perl Contempt in My Workplace

by vkon (Curate)
on May 28, 2021 at 08:12 UTC (#11133205)


in reply to Re^5: Perl Contempt in My Workplace
in thread Perl Contempt in My Workplace

With all due respect - why inflexible?

I've read your post Re^6: Perl Contempt in My Workplace carefully - what did I do that was "inflexible"?
In that reply you stated that you only return JSONs, but JSONs are not possible for 10_000_000 records.

Replies are listed 'Best First'.
Re^7: Perl Contempt in My Workplace
by hippo (Bishop) on May 28, 2021 at 09:43 UTC
    JSONs are not possible for 10_000_000 records

    JSON is a data format. It has no limitations on the size of the data.


    🦛

      Technically, yes. But nearly all JSON parsers I've seen are designed to slurp in everything all at once and turn it into an in-memory data structure. So for very large files, you might (or might not) have to cobble together a custom parser that can do a stream-as-you-go approach.

      Of course, that's where Perl comes into its own. Munching insanely huge text files is what it was designed for in the first place ;-)

      perl -e 'use Crypt::Digest::SHA256 qw[sha256_hex]; print substr(sha256_hex("the Answer To Life, The Universe And Everything"), 6, 2), "\n";'

        You're an Oracle man, right? Oracle has JSON_TABLE to alleviate the JSON weirdness, I think.

        Postgres has the same functionality, but not yet the JSON_TABLE API/functions that the SQL standard prescribes. That JSON_TABLE work (for Postgres) is largely done, although not yet committed. Somewhat understandably, there seems to be a lack of interest: most DBAs look down a bit on the strangeness of the JSON data type, and prefer tables of more conventional data types. I take it that in the Oracle world there is the same reluctance towards this encroachment of NoSQL-y types. I feel the same reluctance myself.
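        For the curious, a JSON_TABLE call looks roughly like this (SQL:2016, in Oracle 12c+ syntax; a table vkon_json with a column js holding documents like {"f1": "..."} is assumed for illustration):

```sql
-- Sketch only: flatten a JSON column into relational columns.
SELECT jt.f1
FROM   vkon_json v,
       JSON_TABLE(v.js, '$'
           COLUMNS (f1 VARCHAR2(32) PATH '$.f1')) jt
FETCH FIRST 3 ROWS ONLY;
```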

[OT] Re^7: Perl Contempt in My Workplace
by erix (Prior) on May 28, 2021 at 10:53 UTC

    JSONs are not possible for 10_000_000 records

    As hippo says, JSON is just a data format.

    Let's give it a try, testing with some generated md5 values:

    testdb=# select count(*) from vkon_json;
      count
    ----------
     10000000
    (1 row)

    testdb=# select * from vkon_json limit 3;
                         js                     | id
    --------------------------------------------+----
     {"f1": "c4ca4238a0b923820dcc509a6f75849b"} |  1
     {"f1": "c81e728d9d4c2f636f067f89cc14862c"} |  2
     {"f1": "eccbc87e4b5ce2fe28308fd9f2a7baf3"} |  3
    (3 rows)

    -- retrieve:
    testdb=# select * from vkon_json where js @> '{"f1": "d1ca3aaf52b41acd68ebb3bf69079bd1"}';
                         js                     |    id
    --------------------------------------------+----------
     {"f1": "d1ca3aaf52b41acd68ebb3bf69079bd1"} | 10000000
    (1 row)

    Time: 0.679 ms

    Under a millisecond. What do you think? Possible?

    (I could have put everything into a 1-column, 1-row table, but that just seems too dumb. Possible, though, and it would perform just as fast.)
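    A note on why the containment lookup can be that fast on 10M rows: a `@>` query normally only hits sub-millisecond times when a GIN index backs it. A plausible setup (the index name is assumed):

```sql
-- jsonb containment (@>) is supported by GIN indexes; the jsonb_path_ops
-- operator class keeps the index smaller when only @> queries are needed:
CREATE INDEX vkon_json_js_idx ON vkon_json USING gin (js jsonb_path_ops);
```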

Re^7: Perl Contempt in My Workplace
by marto (Cardinal) on May 28, 2021 at 12:46 UTC

    Inflexible in that you make claims that have no basis in reality beyond your own stance (e.g. Re^9: Perl Contempt in My Workplace), and keep asserting the same flawed responses despite previous corrections, as demonstrated above.

      My own recent experience of the problem does have a basis in reality, because this is what happened to me.

      I assume you are not saying that what I said was untrue (because I haven't lied).

      I honestly tried to find an acceptable solution in Perl, but decided to look elsewhere because the Perl solution seemed incomplete to me.
      It could be that my decision to switch to another solution was wrong - I can accept that my search-fu is weak or that my intuition failed this time.

      Two questions for you:

      • the statement above - is it also flawed? If so, in what way is it flawed?
      • the more important question:
        can you point me to a ready-made Perl solution for a single large SQL table on a server, with a DataTables frontend offering Excel-like filtering and searching?
        I will switch to it immediately.
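      For reference, a server-side endpoint for DataTables can be put together in a few dozen lines of Perl. This is only a sketch, assuming Mojolicious and DBD::Pg are installed; the table name records, its columns, and the DSN are hypothetical, while the parameter names (draw, start, length, search[value]) come from DataTables' server-side processing protocol:

```perl
#!/usr/bin/env perl
use Mojolicious::Lite -signatures;
use DBI;

# Hypothetical connection and table; adjust DSN/credentials as needed.
my $dbh = DBI->connect('dbi:Pg:dbname=testdb', '', '', { RaiseError => 1 });

get '/data' => sub ($c) {
    my $start  = $c->param('start')         // 0;
    my $length = $c->param('length')        // 25;
    my $search = $c->param('search[value]') // '';

    my $total = $dbh->selectrow_array('SELECT count(*) FROM records');

    # Optional Excel-like filter on one column, as a placeholder.
    my ($where, @bind) = ('');
    if (length $search) {
        $where = 'WHERE f1 ILIKE ?';
        @bind  = ('%' . $search . '%');
    }
    my $filtered = $dbh->selectrow_array(
        "SELECT count(*) FROM records $where", undef, @bind);

    # Only the requested page ever leaves the database.
    my $rows = $dbh->selectall_arrayref(
        "SELECT id, f1 FROM records $where ORDER BY id LIMIT ? OFFSET ?",
        undef, @bind, $length, $start);

    $c->render(json => {
        draw            => scalar $c->param('draw'),
        recordsTotal    => $total,
        recordsFiltered => $filtered,
        data            => $rows,
    });
};

app->start;
```

      The key design point is that paging and filtering happen in SQL (LIMIT/OFFSET and the WHERE clause), so the 10M rows never have to be serialized at once.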

        Your "experience" is the outcome of your behaviour. You didn't bother reading the front page, or seemingly any other documentation for the tool you chose to use. You continually reiterate false information, regardless of how many different people correct you. You take a baseless stance regarding version numbers, and refuse to accept alternative viewpoints/reality. The method to write some code to achieve what you want has already been described, and here you are asking for someone else to do it for you. Throw contradictory statements into the mix... inflexible: unwilling to change or compromise.

