deanishe / alfred-workflow

Full-featured library for writing Alfred 3 & 4 workflows

Home Page: https://www.deanishe.net/alfred-workflow/

License: Other


alfred-workflow's Introduction

Alfred-Workflow logo

Alfred-Workflow

A helper library in Python for authors of workflows for Alfred 3 and 4.


Supports Alfred 3 and Alfred 4 on macOS 10.7+ (Python 2.7).

Alfred-Workflow takes the grunt work out of writing a workflow by giving you the tools to create a fast and featureful Alfred workflow from an API, application or library in minutes.

Always supports all current Alfred features.

Features

  • Auto-saved settings API for your workflow
  • Super-simple data caching with expiry
  • Fuzzy filtering (with smart diacritic folding)
  • Keychain support for secure storage of passwords, API keys etc.
  • Lightweight web API with Requests-like interface
  • Background tasks to keep your workflow responsive
  • Simple generation of Alfred JSON feedback
  • Full support of Alfred's AppleScript/JXA API
  • Catches and logs workflow errors for easier development and support
  • "Magic" arguments to help development/debugging
  • Unicode support
  • Pre-configured logging
  • Automatically check for workflow updates via GitHub releases
  • Post notifications via Notification Center

Alfred 4+ features

  • Advanced modifiers
  • Alfred 4-only updates (won't break older Alfred installs)


Installation

Note: If you're new to Alfred workflows, check out the tutorial in the docs.

With pip

You can install Alfred-Workflow directly into your workflow with:

# from your workflow directory
pip install --target=. Alfred-Workflow

You can install any other library available on the Cheese Shop the same way. See the pip documentation for more information.

It is highly advisable to bundle all your workflow's dependencies with your workflow in this way. That way, it will "just work".

From source

  1. Download the alfred-workflow-X.X.X.zip from the GitHub releases page.
  2. Extract the ZIP archive and place the workflow directory in the root folder of your workflow (where info.plist is).

Your workflow should look something like this:

Your Workflow/
    info.plist
    icon.png
    workflow/
        __init__.py
        background.py
        notify.py
        Notify.tgz
        update.py
        version
        web.py
        workflow.py
    yourscript.py
    etc.

Alternatively, you can clone/download the Alfred-Workflow repository and copy the workflow subdirectory to your workflow's root directory.

Usage

A few examples of how to use Alfred-Workflow.

Workflow script skeleton

Set up your workflow scripts as follows (if you wish to use the built-in error handling or sys.path modification):

#!/usr/bin/python
# encoding: utf-8

import sys

# Workflow3 supports Alfred 3's new features. The `Workflow` class
# is also compatible with Alfred 2.
from workflow import Workflow3


def main(wf):
    # The Workflow3 instance will be passed to the function
    # you call from `Workflow3.run`.
    # Not super useful, as the `wf` object created in
    # the `if __name__ ...` clause below is global...
    #
    # Your imports go here if you want to catch import errors, which
    # is not a bad idea, or if the modules/packages are in a directory
    # added via `Workflow3(libraries=...)`
    import somemodule
    import anothermodule

    # Get args from Workflow3, already in normalized Unicode.
    # This is also necessary for "magic" arguments to work.
    args = wf.args

    # Do stuff here ...

    # Add an item to Alfred feedback
    wf.add_item(u'Item title', u'Item subtitle')

    # Send output to Alfred. You can only call this once.
    # Well, you *can* call it multiple times, but subsequent calls
    # are ignored (otherwise the JSON sent to Alfred would be invalid).
    wf.send_feedback()


if __name__ == '__main__':
    # Create a global `Workflow3` object
    wf = Workflow3()
    # Call your entry function via `Workflow3.run()` to enable its
    # helper functions, like exception catching, ARGV normalization,
    # magic arguments etc.
    sys.exit(wf.run(main))

Examples

Cache data for 30 seconds:

from workflow import web

def get_web_data():
    return web.get('http://www.example.com').json()

def main(wf):
    # Save data from `get_web_data` for 30 seconds under
    # the key ``example``
    data = wf.cached_data('example', get_web_data, max_age=30)
    for datum in data:
        wf.add_item(datum['title'], datum['author'])

    wf.send_feedback()

Web

Grab data from a JSON web API:

data = web.get('http://www.example.com/api/1/stuff').json()

Post a form:

r = web.post('http://www.example.com/',
             data={'artist': 'Tom Jones', 'song': "It's not unusual"})

Upload a file:

files = {'fieldname': {'filename': "It's not unusual.mp3",
                       'content': open("It's not unusual.mp3", 'rb').read()}}
r = web.post('http://www.example.com/upload/', files=files)

WARNING: As this module is based on Python 2's standard HTTP libraries, on old versions of OS X/Python, it does not validate SSL certificates when making HTTPS connections. If your workflow uses sensitive passwords/API keys, you should strongly consider using the requests library upon which the web.py API is based.

Keychain access

Save password:

wf = Workflow()
wf.save_password('name of account', 'password1lolz')

Retrieve password:

wf = Workflow()
wf.get_password('name of account')
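
If no password is stored under the given account name, get_password raises a PasswordNotFound exception (importable from the package, as one of the issue reports below shows). A minimal sketch of the usual handling; the "setkey" keyword in the subtitle is hypothetical:

from workflow import Workflow, PasswordNotFound

wf = Workflow()
try:
    api_key = wf.get_password('name of account')
except PasswordNotFound:
    # No secret saved yet; show a hint instead of results
    wf.add_item('No API key set',
                'Use the (hypothetical) "setkey" keyword first',
                valid=False)
wf.send_feedback()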

Documentation

The full documentation, including API docs and a tutorial, can be found at deanishe.net.

Dash docset

The documentation is also available as a Dash docset.

Licensing, thanks

The code and the documentation are released under the MIT and Creative Commons Attribution-NonCommercial licences respectively. See LICENCE.txt for details.

The documentation was generated using Sphinx and a modified version of the Alabaster theme by bitprophet.

Many of the cooler ideas in Alfred-Workflow were inspired by Alfred2-Ruby-Template by Zhaocai.

The Keychain parser was based on Python-Keyring by Jason R. Coombs.

Contributing

Adding a workflow to the list

If you want to add a workflow to the list of workflows using Alfred-Workflow, don't add it to the docs! The list is machine-generated from Packal.org and the library_workflows.tsv file. If your workflow is available on Packal, it will be added on the next update. If not, please add it to library_workflows.tsv, and submit a corresponding pull request.

The list is not auto-updated, so if you've released a workflow and are keen to see it in this list, please open an issue asking me to update the list.

Bug reports, pull requests

Please see the documentation.

Contributors

Workflows using Alfred-Workflow

Here is a list of some of the many workflows based on Alfred-Workflow.

alfred-workflow's People

Contributors

deanishe, ecbrodie, ecmadao, fniephaus, idpaterson, jag-k, janclarin, notabene01, owenwater, terryx-lee, zhuozhiyongde


alfred-workflow's Issues

workflow.filter and non unique search keys

I've noticed that the filter method will drop (actually overwrite) all items with the same search key except the last one.

for example:

import workflow

def key(item):
    return item['name']

items = [{'name':'x','attr':'y'}, {'name':'x','attr':'z'}]

wf = workflow.Workflow()

print wf.filter('x', items, key)  # [{'name': 'x', 'attr': 'z'}]

In my use-case both results are actually valid (think file names for example).

To "fix" this issue, it's probably best to attach some kind of unique ID for every item, this way it will assure unique results keys.

Would you mind if I submit a patch that will add additional flag to the filter method that will allow duplicated results output?

Pickling cache?

Is there a performance reason that you cache data using Pickle serialization as opposed to JSON serialization? I, for one, find the human-readability of JSON reason enough to prefer it to Pickle. And, from what little I've read, there aren't major speed differences. So, I'm curious as to the choice?

Punctuation in ASCII_REPLACEMENTS

We need to add some Unicode punctuation marks to the ASCII_REPLACEMENTS dictionary. In my data, I have already (after a very brief search) found these punctuation marks popping up multiple times:

'—': '-',
'–': '-',
'’': "'",

I'm not certain how extensively we should cover punctuation, but there is nothing there now.

Also, I would find it more aesthetically pleasing to move that dictionary to a separate file in the workflow dir and import it. Every time I open workflow.py, I have to scroll for like 3 seconds just to get to the meat.

I can add a pull request, but I wanted to get feedback first.

Add `utils.py` script

I have found that I add a utils script to basically every workflow I write, and the basic functions get better each time, but those improvements take forever to find their way back to the older workflows. Given the popularity of Alfred-Workflow, adding a collection of functions for mundane but often-encountered tasks could prove very helpful, as it would remove possible errors from workflow authors as well as ensure that best practices are kept.

Here are some of the functions that can be found in my utils.py scripts that I think could prove more widely helpful:

  • get_clipboard()
  • set_clipboard(data)
  • read_path(path)
  • write_path(data, path)
  • append_path(data, path)
  • read_json(path)
  • write_json(data, path)
  • run_filter(trigger, arg)
  • run_alfred(query)
  • run_applescript(scpt)
  • applescriptify(data)
  • to_unicode(obj)
  • to_bool(text)
  • check_query(query)
  • strip(obj)

I'm sure we could think of many others. My thinking here is simply that there are a number of things that I do all of the time, and so I want to simplify my code and place that functionality in functions. For common things, why not have this come with Alfred-Workflow to grant a clean API (one of the chief "selling points" of Alfred-Workflow) for these tasks?
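
For illustration, a minimal sketch of the first two helpers, assuming macOS's pbpaste/pbcopy commands as the clipboard interface:

import subprocess

def get_clipboard():
    """Return the current clipboard contents (via pbpaste)."""
    return subprocess.check_output(['pbpaste'])

def set_clipboard(data):
    """Replace the clipboard contents with data (via pbcopy)."""
    proc = subprocess.Popen(['pbcopy'], stdin=subprocess.PIPE)
    proc.communicate(data)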

Turn off update checks

Should it be possible for a user to turn off update checks regardless of what the workflow author has configured?

Although no information is ever sent to GitHub, it's still a "thing", imo.

This would almost certainly be a workflow:noupdate magic arg that disables any checks for updates.

Personally, I think it's the "right" thing to do.

Any thoughts @fniephaus, @smargh?

FTS filtering

I thought it would be cleaner to discuss this feature and its implementation details in a separate issue.

The way I see it, there are a few distinct issues that need to be discussed:

  • how does the filter() API accommodate both the FTS backend and the current iterating backend?
  • what is the minimal amount of functionality for the FTS searching for v2 of Alfred-Workflow?
  • what is the FTS searching API?

Once these are settled, we can focus solely on making the FTS functionality rock-solid.

Here are my initial thoughts:

  1. add a protocol param which accepts either "fts" or "iter" (or whatever you want to call the current implementation).
  2. I say the current 3 flags (and a 4th for the combination) are sufficient. If a workflow author really wants initials-based filtering, he has the other protocol.
  3. I'm currently using a class (FTSDatabase) to represent the sqlite database. You create the database, its table, its columns, its tokenizer, etc and then search that database by interfacing with this class. I think having this class, but keeping it clean, simple, and low-level is best. Create the various query types in the filter() (or some sub-function), structure the data in filter(), etc. All FTSDatabase does is create an indexed database and allow you to search that data.

PS. I'm not certain which commit you were looking at, but database creation does use sqlite's default parameter substitution (see here). I did, though, just fix the searching so that the query uses it as well.

Other thoughts?

Require `version` file?

@fniephaus @smargh What do you think about requiring a version file? Simply a file in the workflow root called version that contains the version string.

It's not an enormous imposition on workflow authors and would strongly encourage best practices.

There would be a corresponding workflow:version magic arg.

I think it would be beneficial in a few regards:

  • It'd be easier for workflow authors to debug problems because it's easy for users to tell them which version they're using.
  • No need to specify the version number in multiple places if you use multiple mechanisms for updating (DRY rules).
  • It's simply best practice, and Alfred doesn't look likely to enforce the specification of a version number.

My suggestion would be to insist on at least semi-semantic versioning, and not silly bollocks like the "Aries", "Taurus" versions that the Alfred Dependency Bundler uses.

Clean the cache file

I'm writing a workflow that will generate lots of cache files which will expire soon, so I need a function to clean up the expired cache automatically. Here is my basic idea.

  1. Clean Cache function def clean_cache(self, expire_age, include, exclude) (a sketch follows below this list)
    • expire_age Only cache files whose age is older than expire_age will be deleted.
    • include regular expression; only matching cache files may be deleted.
      Default value: "*." + self.cache_serializer
    • exclude same as include, but matching files won't be deleted.
      Default value: None
    • If a cache file matches both include and exclude, it shall not be deleted.
  2. Call clean_cache() automatically. There are multiple possible solutions.
    • Create a cron job the first time the workflow is executed.
      • Needs a first_run flag.
      • A personal computer may shut down every day.
    • Create a last_cleanup_time flag in the data directory.
      • Potentially conflicts with the developer's own flags. Workaround: a custom flag name.
    • Leave it to the developer.
      • The developer decides how and when to call clean_cache().

Any thoughts about it?
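
A minimal sketch of the cleanup function itself, treating include/exclude as fnmatch-style patterns (the proposal says "regular expression", but the suggested default value looks like a glob), with ages in seconds:

import fnmatch
import os
import time

def clean_cache(cachedir, expire_age, include='*', exclude=None):
    """Delete cache files in cachedir older than expire_age seconds."""
    now = time.time()
    for name in os.listdir(cachedir):
        path = os.path.join(cachedir, name)
        if not os.path.isfile(path):
            continue
        if not fnmatch.fnmatch(name, include):
            continue
        if exclude and fnmatch.fnmatch(name, exclude):
            continue  # exclude wins over include
        if now - os.path.getmtime(path) > expire_age:
            os.unlink(path)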

Options for workflow.filter

I've moved all of ZotQuery's various queries to Workflow's filter method (which is quite ingenious. Reading thru the code has taught me quite a few things), and I've had some thoughts for possible improvements.

As it stands, filter captures just about anything it can (call this "loose results"). I personally would like some more fine-grained control over its scope (make it return stricter results). I imagine this would prove useful for other workflows as well.

What I propose is twofold (and I could help with the code if you want):

First, provide a parameter score_limit which sets a minimum score for results. For example, if I run the query "horace" thru my Zotero db using filter, I get 200 results. Many (66) have a score larger than 30, but the vast majority have a score lower than that (144). Of those, most (138) have a score lower than 5! I definitely don't want those bottom 138, and I probably wouldn't even want the bottom 144 (score < 30) returned in ZotQuery. Looking at your code, it would appear that changing line 700 in workflow.py (I have the most recent commit version) from:

if score > 0:

to

if score > score_limit:

would make this viable. Obviously, you would then only need to define the function as:

def filter(self, query, items, key=lambda x: x, score_limit=0, ascending=False,
               include_score=False):

So that is one way to allow for more control on the returned results.

Second, I think it would also prove beneficial to provide a parameter (rules?) that accepts a list of rules for matching. As it stands now, filter uses these rules:

rules = ['startswith', 'capitals', 'atom', 'initials:startswith', 'initials:contains', 'substring', 'allchars']

For ZotQuery, I don't imagine users would really ever query with the initials of a title. So, I would like to block results that get thru via those rules (capitals and both initials). Off the top of my head, I don't have a quick and easy method for adding this feature, so maybe it would prove too difficult, but in general, I would suggest allowing for more flexibility in determining the scope of returned results, on a spectrum from loose to strict.

Again, great stuff tho.

The big V2 refactoring and API thread

A significant refactoring of Alfred-Workflow is long overdue. The Workflow class has grown very large and the unittests are a horror show, largely due to the necessity of a workflow-like environment to run them in.

I'd love to hear your ideas on how the library should be refactored and how the API should be changed.

My thoughts so far:

Refactoring

  • Move all the code that requires a workflow environment (info.plist or version files, Alfred environmental variables) to its own module(s).
  • Split the different feature groups of Workflow into separate modules (data storage and cache functions, settings, filtering, text processing, serialisation etc.).
  • Generalise the update API, so other backends (e.g. RSS, Packal) can be added easily.
  • Add lots of mocking to unittests, especially to return canned HTTP responses and avoid hitting web APIs.

API changes

  • Change default max_age to 0 in Workflow.cached_data(), so cached data is always returned regardless of age by default.
  • Move version from update_settings dict to a Workflow argument (as an alternative to version file).
  • (Possibly) allow filter() to be called with an empty query, whereupon it will return all items. This would mirror the way the built-in Python filter() function works, and allow @smargh to not bother checking whether query is empty.

Paging @smargh @fniephaus @owenwater 😀

Cocoa and Core Foundation Unicode issues

I was going to just make a simple pull request, but I wanted to ensure my thinking is sound before I do. In making metadata, I came across a new Unicode issue. Workflow.decode did nothing with Unicode text returned by subprocess when running the mdls command. I couldn't figure out what was going on, so I posted on StackExchange. As per usual, I got a great answer pretty quickly. The basic issue is that Cocoa and Core Foundation represent Unicode using \U with 4 digits for BMP characters, because ObjC (unlike C, C++, and Python) allows \U with 1-8 digits instead of exactly 8 (and \u with 1-4). In order to get the text into good Python Unicode form, I needed to filter the text like so:

if '\\U' in stdout:
    # the result must be assigned; .replace() doesn't modify in place
    stdout = stdout.replace('\\U', '\\u').decode('unicode-escape')

I think this would be a helpful addition to Workflow.decode. It appears that this is an edge case (getting Unicode data from Cocoa or Core Foundation scripts), but it is a gnarly one at that. The if clause would ensure that this is only run when necessary, but it would make decode handle this edge case well.

The new decode would look like this:

def decode(text, encoding='utf-8', normalization='NFC'):
    # convert string to Unicode
    if isinstance(text, basestring):
        if not isinstance(text, unicode):
            text = unicode(text, encoding)
    # decode Cocoa/CoreFoundation Unicode to Python Unicode
    if '\\U' in text:
        text = text.replace('\\U', '\\u').decode('unicode-escape')
    return unicodedata.normalize(normalization, text)

Thoughts? Should I make the pull request as is?

Version 2 reorganisation

Alfred-Workflow is approaching version 2.0, at least numerically speaking.

The main module, workflow.py, has grown to >2000 lines and recent additions have used a more dynamic approach (serialisers, potentially updaters).

Would it make sense to restructure Workflow so that it uses pluggable hooks to perform things like, e.g. filtering?

Is there a more abstract, plugin-based model that would make Alfred-Workflow more useful and powerful?

Feature Request: wf.local

I think adding a method to access a workflow's root folder would prove handy. I know alp had this, and I use it a fair bit. Alongside wf.cache and wf.data, this would account for all 3 major workflow-related folders. Of course, a corresponding method wf.localfile(filename) would be helpful as well.
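
A sketch of how such a method might locate the workflow root, assuming the usual layout in which info.plist sits in the root directory:

import os

def find_workflow_root(start_dir=None):
    """Walk up from start_dir until a directory containing info.plist is found."""
    path = os.path.abspath(start_dir or os.getcwd())
    while True:
        if os.path.exists(os.path.join(path, 'info.plist')):
            return path
        parent = os.path.dirname(path)
        if parent == path:  # hit the filesystem root without finding it
            raise IOError('info.plist not found')
        path = parent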

`filter` score

ZotQuery was having problems with the filter method not returning items that should match. For example, the value string would be

"Isocrates and the Rhetoric to Alexander: Meaning and Uses of Tekmerion Noel Rhetorica: A Journal of the History of Rhetoric"

and the query would be

"noel"

The item with that value string would not be returned. I fiddled around with the filter method for a while and eventually found that the issue was in the score computation. With a value like the one above, the computation

(len(value) - len(query))

would often be > 100, which would yield a negative score, which would in turn exclude that item from the results.

As you've said, you're not being overly precise with the scoring, but this is a bug for larger value strings. I've altered the code to compute the score using division rather than subtraction, ensuring a number < 100 and keeping every item that matches on a given rule in the results.
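
To make the suggestion concrete, a hypothetical scoring function along those lines (schematic; not the library's actual code):

def score(query, value):
    # Subtraction (100 - (len(value) - len(query))) goes negative for long
    # values; division shrinks towards 0 but never crosses it.
    return 100.0 * len(query) / len(value)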

If this makes sense, I can make a pull request.

Update via Packal

So, I've looked over @fniephaus' update addition to Alfred-Workflow, and I like it a lot. For me, however, it would add just another additional step to my workflow publishing. Now, I would need to

  1. upload on Packal
  2. release on GitHub
  3. post on Alfred

Now, I do try to keep my GitHub repos for my workflows up-to-date, but I'm not always on top of that. Plus, I don't currently "release" new versions via GitHub. That's what I use Packal for. So, I would love to have this functionality work with Packal as a backend, not the workflow's GitHub releases.

I am working on code to do just this; however, I am uncertain how best to integrate it into Alfred-Workflow. Obviously, I think both auto-update backends should be supported, but what's the simplest way to accommodate both? I was thinking: introduce classes to update.py. Have a GitHub class with @fniephaus' code and a Packal class with my code. Then add another argument to update_info when initializing a workflow, maybe backend? We would then also have to abstract away from the github_slug arg, since the Packal code requires the workflow's bundle ID. But since Alfred-Workflow can get that automatically, maybe make github_slug required only if backend=github is specified.

Basically, I want some community input on how to create a robust auto-updating feature that allows for multiple backends. You can see my initial mirror of @fniephaus' update.py in this gist.

Thoughts?

Compatibility of the library with OS X 10.7.5

I've been trying to make the Python-written workflow alfred-pocket work on a Mac running Lion (10.7.5). However, I am not raising an issue about the workflow per se, but about the alfred-workflow library it depends on. A solution could help other Python-written Alfred workflows that use this library and need to run on Lion. Needless to say, the machine I use cannot be upgraded beyond Lion.
So in my specific case, this workflow wasn't supposed to work on Lion, but I wanted to know why not. In the workflow's debug window, I found that the error I came across was caused by the following lines (1893-1894) in the library's workflow.py file:

retcode, password = self._call_security('find-generic-password',
                                        service, account, '-w')

In Lion, the -w option actually doesn't exist for the security find-generic-password command.
So this means that the alfred-workflow library is not fully compatible with OS X versions prior to 10.8.
In my case, I just replaced -w by the -g option in workflow.py, i.e.

retcode, password = self._call_security('find-generic-password',
                                         service, account, '-g')

This makes the error disappear.
However, the workflow still isn't working (although it works the first time, it crashes later on), as other authentication errors show up. It is close to working, though.
I would like to know whether other tweaks to the library's source code might be needed to make the library (not the workflow) fully compatible with Lion (so that I can move on to the workflow-specific code in a second step).
Thanks

Self-initializing workflow data

I've recently started thinking about how I could abstract a technique I've been using in my most recent rewrites of ZotQuery and Pandoctor: self-initializing data accessible via object properties. Let me explain what I mean by that. Both ZotQuery and Pandoctor are Alfred GUIs for some other application or utility which generates and uses a lot of non-volatile data. For example, with ZotQuery, I am pulling from a user's Zotero database. In order to increase speed, I have written ZotQuery and Pandoctor in such a way as to get the data I need in the format I need, and then write it to disk in the workflow's non-volatile directory (from which all workflow functions then read accordingly). So, this data is non-volatile, essential, and not settings. Also, the workflow needs this data all of the time. So, what of "self-initializing data"? My current approach is to get this data with the same consistent API each time, regardless of whether or not it exists. I do this by creating an object class with some introspection. Here's some of my ZotQuery code as an example:

class Zotero(object):
    """Contains all relevant information about user's Zotero installation.

    :param wf: a new :class:`Workflow` instance.
    :type wf: :class:`object`

    """
    def __init__(self, wf):
        self.wf = wf
        self.me = self.paths()

    def paths(self):
        """Dictionary of paths to relevant Zotero data:
            ==================      ============================================
            Key                     Description
            ==================      ============================================
            `original_sqlite`       Zotero's internal sqlite database
            `internal_storage`      Zotero's internal storage directory
            `external_storage`      Zotero's external directory for attachments

        Expects information to be stored in :file:`zotero_paths.json`.
        If file does not exist, it creates and stores dictionary.

        :returns: key Zotero paths
        :rtype: :class:`dict`

        """
        zotero_paths = self.wf.stored_data('zotero_paths')
        if zotero_paths is None:
            paths = {
                'original_sqlite': self.original_sqlite,
                'internal_storage': self.internal_storage,
                'external_storage': self.external_storage
            }
            self.wf.store_data('zotero_paths', paths,
                                    serializer='json')
            zotero_paths = paths
        return zotero_paths

    @property
    def original_sqlite(self):
        """Return path to Zotero's internal sqlite database.

        Expects information in :file:`zotero_paths.json`.
        If file doesn't exist, it finds path manually.

        :returns: full path to file
        :rtype: :class:`unicode`

        """
        try:
            return self.me['original_sqlite']
        except AttributeError:
            sqlites = self.find_name('zotero.sqlite')
            return sqlites[0] if sqlites != [] else None

The basic set-up is this: on initialization, run the paths() method and set the me attribute to the dictionary it returns. When paths() runs, it checks to see if the dictionary has been written to disk. If not, it generates the dictionary, writes it to disk, and then returns it (to be assigned to self.me). In generating the dictionary, it calls object properties (like original_sqlite). These properties attempt to read the value from the dictionary (so that you can later access the property directly at read speed), but on first run this raises an AttributeError, since the self.me variable isn't set yet. In that branch of the code, you put whatever code you need in order to get the data required. In the example above, I use a wrapper for mdfind to find the full path to the user's Zotero sqlite database. The result is that I have all of the data I need initialized and written to disk on the first run, simply by creating an instance of the class. The idea here is basically the same as the idea behind the Alfred Bundler: have a consistent API where things are automatically fetched on first run and then read on all subsequent runs, although the API doesn't need to be aware of which run you are on.

I think a lot of workflows have some data of this sort, and if we could abstract this to provide an API for it, that could be very helpful. My problem is that I'm not certain exactly how one would do this (if it's even possible). Generically, I'm thinking of dynamically populating a subclass of Workflow (akin to Settings) with properties, structured in this way. Is there a property-generating decorator?

Anyways, is this feasible or even desirable in your mind?

suggestion to lower default logging level and add magic argument to control it

Debug logging might be pretty expensive to have turned on by default.
I know that the workflow author can change it to whatever he wants, but I wonder whether the better approach is to set the level to INFO and add new magic args to turn debug on/off?

This way debug logging isn't forced on every workflow by default, and it's still possible to debug the workflow if needed.

In any case, if the custom magic arguments feature goes in, it will be possible for workflow authors to add this behavior themselves.

Info for workflow's main directories

I've recently updated my fork of alfred-workflow to return the key info for a workflow's 3 main directories (cache, data, root). I've added a "magic arg" (workflow:filterinfo) which calls a new method, self.info_filter(), that gets and formats the relevant information for each directory. The format is:

title: "Info for wf.name's dir-type directory"
sub-title: "pretty-printed size in n file(s) and n directory/ies"

I've fiddled with the code a good bit myself, and I still need to add more descriptions, but I think this is a helpful addition. I personally added it because my new workflow EN-Wikify creates and stores files in the data directory. I like that the user can clean that directory out at any time using the appropriate magic arg, but I wanted a way for the user to check how big it is, in both raw size and files + dirs, before cleaning it out. This seems like an appropriate companion to the deleting magic args.

Anyways, before I push, I wanted to get your sage feedback.

Ascending key in `filter` lost [FIX]

In commit 5a22c11, you removed the ascending key from the filter() code, such that changing its value has no effect. I was trying really hard to rebase my fork, but your regression from @fniephaus's updating code really has me buggered. Since it is an easy fix, I thought I'd just post it. All you need to do is change line 1689 of workflow.py from results.sort(reverse=True) to results.sort(reverse=ascending).

New "magic" args

I think I remember seeing this somewhere as a comment in your code, so it may be already on your roadmap, but I think adding 3 new magic arguments could prove helpful:

  • open workflow's root directory
  • open workflow's data directory
  • open workflow's cache directory

I don't currently have specific use-cases, which might be reason enough to leave them out, but they seem helpful in the abstract.

Results of `filter()` when query is empty

Is it supposed to be the default behavior of filter() that no items are returned if the query is empty? If so, what is the reasoning? Also, we should add a flag to allow that to be flipped (show all results on an empty query). This would allow workflows to show all items at the start of a filter (how I prefer my Script Filters to work, especially in Pandoctor). I'm fine with this as the default (if it was intentional), but the flag really is needed, IMO.
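
Until such a flag exists, the usual workaround is to bypass filter() altogether when the query is empty:

# Show everything when the user hasn't typed anything yet
matches = wf.filter(query, items, key) if query else items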

Add `stored_data` and `store_data` methods

I think I want to add stored_data and store_data to Workflow class. These methods would exactly mirror cached_data and cache_data, but the data would be saved to the workflow's storage dir and no max_age would be required (i.e. max_age=0 is the permanent default).

However, I was wondering if there was some larger reason that you didn't originally put these methods in that would keep you from accepting a later pull-request?

Also, I would prefer to serialize the data into JSON, as mentioned in Issue 16.

Use Alfred workflows as CLI apps

So, I confess that this may, by definition, rest outside of the purview of Alfred-Workflow. But I also think that it would be a great addition to the underlying library, as opposed to forcing workflow authors to roll their own implementation.

I've been thinking a lot lately about generalizing a few of my workflows to function as CLI apps as well as Alfred workflows. This is primarily to broaden their utility and appeal. While plenty of Alfred workflows can't be converted to CLI apps, many more can. Any workflow that merely utilizes Alfred's GUI and has its code in external files can function as a CLI app. So, for example, my ZotQuery and LibGen would work easily as CLI apps.

Now, how would a workflow be agile enough to do both? Well, as far as I can think, the only real difference would be output formatting. Alfred wants specially formatted XML; a CLI app needs to give clearly formatted text. Other than that, everything under the covers would work exactly the same (including the backend data storage stuff). Point: all that changes is the interface.

So, how could Alfred-Workflow play a part? First of all, it could intelligently format results depending on the calling environment. If Alfred calls the script, format as XML. If Terminal calls, format as text. How to tell if Alfred is calling? Are the alfred_env variables set? If yes, Alfred; if no, Terminal.
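
A minimal sketch of that check, assuming Alfred's alfred_* environment variables (e.g. alfred_version) are set only when Alfred runs the script:

import os

def called_from_alfred():
    """True if the script appears to have been launched by Alfred."""
    return any(k.startswith('alfred_') for k in os.environ)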

The only remaining issue would be to ensure that $PATH includes the workflow script. My first thought is that the workflow would include a preference (either in a JSON file or via some configuration UI steps) to turn the CLI tool on. If it is turned on, Alfred-Workflow alters $PATH in the user's rc file (or something; this part I'm not so certain about).

Thoughts?

filter: matching whole query without splitting to words

Hello.

I have a use case where I want to filter a collection based on input like [a] -> a, in such a way that results exactly matching the query get a higher score. I ran into a problem where, for example, a query [a] -> a would give a higher score to a result like a -> [a], even when there was an exact match in the collection for that query.

So I dug a bit into how the filtering works and noticed that the query is split into words and each of those words is matched against the items. That is why it sees the results [a] -> a and a -> [a] as equal: the query is matched against them word by word, i.e. ["[a]", "->", "a"]. I came up with a workaround that did not involve changing alfred-workflow's source: replace all spaces with underscores in both the query and the "comparison key". This prevents the word splitting from happening, and exact matches work.
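
For reference, the workaround described above looks roughly like this (make_key stands in for whatever key function the workflow already uses):

def no_split_key(item):
    # Underscores stop filter() from splitting the key on whitespace
    return make_key(item).replace(' ', '_')

matches = wf.filter(query.replace(' ', '_'), items, key=no_split_key)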

Anyway, do you think it might make sense to have an option in the filter function to skip the word splitting?

Hope this made sense.

Empty XML elements and attributes: what do?

@fniephaus brought up an issue on the Alfred forum regarding Alfred's behaviour if autocomplete is an empty string. Turns out, it's an issue with Alfred-Workflow, which doesn't generate XML elements or attributes that are set to empty strings.

In most cases, Alfred behaves the same if an elem/attr is missing or set to an empty string. The exceptions are autocomplete and arg.

In the case of autocomplete, TABbing a result with an empty string resets the query to the keyword (which seems pretty useful).

With arg, however, an empty string results in Alfred calling the connected action with an empty {query}.

Is that a useful behaviour?

Wrapper for mdfind

TL;DR: I wrote one, it's here: https://github.com/smargh/metadata

Many moons ago, I saw this item on your TODO. Then, in writing ZotQuery and a couple other workflows, I found myself interfacing with mdfind often enough. Recently, I have started digging deeper into Python API design. I want to have a better sense of what makes a Python API good, how to build an API, how to use OOP to build an API, and how to publish a library to PyPi. So, I thought writing a wrapper library for OS X's metadata executables (mdfind and mdls) would be a great opportunity to kill 4 birds with one stone.

I've stopped and started a couple of times over the last couple of months, but I broke thru a couple of days ago, and now I have an initial release on GitHub. I need to go in and add all of the documentation (in code and the README), but you can see the API at work in the test.py file. Basically, I've implemented 3 classes (MDAttribute, MDComparison, and MDExpression) to represent the various units of mdfind's Query Expression Syntax. An MDAttribute object represents exactly that, a metadata attribute (like kMDItemFSName). The attributes.py module dynamically generates MDAttribute object to represent all Spotlight attributes on the user's system (using mdimport -A to get them). The naming of these objects aims to get them into Pythonic form, using this function:

def clean_key(self, key):
    uid = key.replace('kMDItemFS', '')\
             .replace('kMDItem', '')\
             .replace('kMD', '')\
             .replace('com_', '')\
             .replace(' ', '')
    return self.convert_camel(uid)

Thus, kMDItemFSName becomes name and kMDItemContentType becomes content_type. You can view all MDAttributes available in attributes.py via all. That is:

from metadata import attributes
print(sorted(attributes.all))

An MDComparison object is created whenever you compare an MDAttribute object to a predicate. The MDComparison makes use of Python's comparison magic methods to implement the API, so these are all MDComparison objects:

from metadata import attributes

attributes.name == 'blank'
attributes.user_tags != 'test'
attributes.creation_date > 'today'
attributes.creation_date < 'yesterday'
attributes.logical_size >= 1000
attributes.logical_size <= 1000

Note that only numeric and date attributes can use any of the greater or lesser comparisons.

If you want to see how the MDComparison object looks as a query string, use the <MDComparison>.format() method:

>>> (attributes.name == 'blank').format()
kMDItemFSName == "blank"cd
>>> (attributes.user_tags != 'test').format()
kMDItemUserTags != "test"cd
>>> (attributes.creation_date > 'today').format()
kMDItemFSCreationDate > $time.iso(2014-12-10T09:00:00)
>>> (attributes.creation_date < 'yesterday').format()
kMDItemFSCreationDate < $time.iso(2014-12-09T09:00:00)
>>> (attributes.logical_size >= 1000).format()
kMDItemLogicalSize >= 1000
>>> (attributes.logical_size <= 1000).format()
kMDItemLogicalSize <= 1000

You group MDComparison objects together to form MDExpression objects. An MDExpression object represents the query expression created when MDComparison objects and/or MDExpression objects are joined. There are only two ways to join elements in an MDExpression object: conjunction and disjunction. MDExpression uses the magic methods __and__ and __or__ to handle these relations. And, as hinted at above, an MDExpression object can consist of infinite units of either the MDComparison or MDExpression type. For example:

from metadata import attributes

comp1 = attributes.name == '*blank*'
comp2 = attributes.user_tags != 'test?'
comp3 = attributes.creation_date > 'today'
comp4 = attributes.creation_date < 'yesterday'
comp5 = attributes.logical_size >= 1000
comp6 = attributes.logical_size <= 1000

# `MDExpression` objects
exp1 = comp1 & comp2
exp2 = comp3 | comp4
exp3 = comp1 & (comp5 | comp6)
exp4 = (comp1 | comp2) & (comp3 | comp4)
exp5 = exp1 | comp1 | exp2

Once again, to see how the MDExpression object looks as a query string, use the <MDExpression>.format() method:

# exp1
kMDItemFSName == "*blank*"cd && kMDItemUserTags != "test?"cd
# exp2
kMDItemFSCreationDate > $time.iso(2014-12-11T09:00:00) || kMDItemFSCreationDate < $time.iso(2014-12-10T09:00:00)
# exp3
kMDItemFSName == "*blank*"cd && (kMDItemLogicalSize >= 1000 || kMDItemLogicalSize <= 1000)
# exp4
(kMDItemFSName == "*blank*"cd || kMDItemUserTags != "test?"cd) && (kMDItemFSCreationDate > $time.iso(2014-12-11T09:00:00) || kMDItemFSCreationDate < $time.iso(2014-12-10T09:00:00))
# exp5
((kMDItemFSName == "*blank*"cd && kMDItemUserTags != "test?"cd) || kMDItemFSName == "*blank*"cd) && (kMDItemFSCreationDate > $time.iso(2014-12-11T09:00:00) || kMDItemFSCreationDate < $time.iso(2014-12-10T09:00:00))

I personally really like the Operator Overloading, though I know that some people don't. I was going for a silky-smooth API, and I think this achieves that best. No need for wordy methods like <MDAttribute>.is_equal(predicate).

Okay, so once you generate your MDExpression object, this is what you will pass to metadata.find(). In addition to this one required argument, metadata.find() also has the optional argument only_in to focus the scope of your search on a particular directory tree. Other than that, there's nothing else to it. Build your query expression, pass it to find(), and get your results as a Python list or string (depending on whether there is more than one result). Here's an example of building an expression and passing it to find():

from metadata import attributes, find

comp1 = attributes.name == '*blank*'
comp2 = attributes.user_tags != 'test?'
comp3 = attributes.creation_date > 'today'

exp = comp1 & comp2 & comp3
find(exp)

In addition to find(), the module has ls, which is a wrapper around the mdls command. You simply pass it a file path and it returns a dictionary of metadata attributes and values. Once again, the attribute names (the dictionary keys) are simplified using the clean_key function seen above. Finally, there is an alpha version of a write() function, which allows you to write metadata to a file. Right now, it defaults to writing to the kMDItemUserTags attribute, but a few others have worked. I need to test it more to make it more general.


As it stands, this is probably a bit too big and abstracted for inclusion in Alfred-Workflow as is, but I do think I could bring the find() function over separately. Once I document it properly, you can look it over and let me know what you think, and whether find() and its backend code would address that TODO item.

Issue with fold setting

I have a ZotQuery user who can't get ZotQuery to function. Whenever he turns on the debugger, his output fails when Alfred-Workflow attempts to retrieve the fold settings. It states that there is no JSON object. I really have no idea what exactly is going on or what the problem might be. You can see the short interaction and the full log here.

Any help would be GREATLY appreciated.

web.py: Replace `Response.iter_content()` with `Response.save_to_path()`?

This came up in the course of #52 (handling gzipped content).

The problem is, Python can't decompress a gzipped data stream. It requires a file or file-like object, meaning the data must be in memory or on disk. And I'm reluctant to implement gzip content-encoding in Response.content if it can't also be handled by Response.iter_content().

My proposal is to replace Response.iter_content() with a Response.save_to_path() method.

My thinking is that the ability to stream HTTP data isn't so useful in an Alfred workflow. Workflows are short-running processes, so streaming is mostly useful for retrieving large files in the background without having to load all the data into memory.

If that is the case, a Response.save_to_path() method should be just as useful (and simpler to use), and would also make handling gzipped data possible without loading it all into memory.
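
A minimal sketch of the proposed method, expressed in terms of the existing iter_content() (assuming it accepts a chunk_size argument, as the Requests-like API suggests):

def save_to_path(self, filepath):
    """Stream the response body to filepath without holding it all in memory."""
    with open(filepath, 'wb') as fp:
        for chunk in self.iter_content(chunk_size=4096):
            fp.write(chunk)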

It's possible that iter_content() might be able to handle gzipped content by piping the data through gunzip, but I haven't tested that yet.

I'd very much appreciate your thoughts/comments.

Alfred forum/GitHub issues URL for reporting errors

Just noticed an additional bit of cleverness by zhaowu (who I pinched the Workflow.run() error-catching idea from).

When a workflow throws an error, the log output could contain a link to the relevant Alfred forum or GitHub issues page, so users can report the error/get help more easily.

If a workflow uses the GitHub update mechanism, this could be done automatically.

Whaddya think?

Relation with `alfred-bundler`?

I remember reading about this in one of the issues in the alfred-bundler repo, but I can't find it anymore. How exactly would this interact with the bundler? I, for one, would love for this to be able to live as a utility in the bundler sandbox. I write lots of workflows for my own machine, and they all use alfred-workflow. I have 10+ instances of it. I would love to have one version that I can hook into from all my workflows. With the new Alfred environment vars, it should be possible for it to work without being in the same directory as info.plist.

Plus, having one copy that all workflows hook into would make it much easier to keep up to date. Right now, I have some workflows on version 1.6, some on 1.7, and some on 1.8. It's such a pain to update them all.

So, is this possible? Is this on the road-map?

First run/version migration

This is related to #35 in that a version would have to be specified one way or another.

I've run into a problem where a new version of a workflow uses a different settings and cache format to the previous version. Not deleting or updating the old files means the workflow will throw an error.

I could just change the bundle ID and not worry about it, but I'm wondering if this might be a fairly common problem, so would it be worth adding some sort of solution to Alfred-Workflow?

A simple solution may be to add a first_run property to Workflow, which would be True if this particular version of the workflow had never run before. I imagine it would be implemented by checking for the existence of an empty file in the data directory, say, <datadir>/first_run/x.y.z. If it doesn't exist, first_run is True and the file is created. On subsequent runs, first_run would then be False.

That would, however, leave it up to the workflow author to devise whatever system is necessary to guess the format of the old settings/data/cache files and delete or update them as necessary.

Would it therefore be useful to have a kind of "migration framework" where a workflow author can register functions to be called to migrate data between versions, e.g.:

wf = Workflow(version='2.2')

wf.upgrade_manager.register('1.1', '1.2', upgrade_1_1_to_1_2)
wf.upgrade_manager.register('1', '2', upgrade_1_to_2)

Whaddya think?

Handling gzipped content in workflow.web

Well here's another edge-case...

I hit a problem using workflow.web to retrieve data from a website. Whenever I attempted to access the text property, I got this error:

UnicodeDecodeError: 'utf8' codec can't decode byte 0x8b in position 1: invalid start byte

I fiddled and faddled for a while, and eventually I came to this StackOverflow thread. The chosen answer was correct, the data was gzipped. I implemented the fix and off I went.

I'm opening this issue because it took me 30 minutes to find the answer, and 5 more to implement it properly. The error type (UnicodeDecodeError) is far too common, and since most people (myself included) get all kinds of confused when dealing with bytes, strings, and Unicode, the answer is far from obvious. As the StackOverflow answer states, this state of affairs (a gzipped data stream) is not uncommon. So, I propose that we bake this solution into workflow.web.

It appears that the byte 0x8b at position 1 is part of gzip's magic number (0x1f 0x8b), so we could simply apply an if/then check in the content property: if the data starts with those bytes, unzip and decode:

import gzip
import StringIO

buf = StringIO.StringIO(<response object>.content)
gzip_f = gzip.GzipFile(fileobj=buf)
content = gzip_f.read()

add_item

In alp, I could pass a dictionary to its add_item method to get XML results. Whenever I try to pass the same dictionary to Workflow via its add_item method, I get this error:

TypeError: cannot serialize {'valid': True, 'subtitle': u'Schiesaro 1997.', 'uid': '121', 'arg': 'XFQN3U5J', 'title': u"The boundaries of knowledge in Virgil's Georgics", 'icon': 'icons/n_chapter.png'} (type dict)

Clearly this is a user error, but I'm wondering whether it would be all that difficult to extend the add_item method to accept dictionaries with valid keys?
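
In the meantime, a dict whose keys match add_item's keyword arguments can simply be unpacked, which sidesteps the error above:

item = {'title': u"The boundaries of knowledge in Virgil's Georgics",
        'subtitle': u'Schiesaro 1997.',
        'uid': '121', 'arg': 'XFQN3U5J',
        'valid': True, 'icon': 'icons/n_chapter.png'}
wf.add_item(**item)  # keyword-unpack rather than passing the dict itself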

Qu: Most likely cause of self.raw being None?

I'm trying to figure out why one of my users is getting this error when using my Pandoctor workflow:

 File "pandoctor.py", line 226, in _formats
    lines = req.text.splitlines()
File "/Users/j*******/Library/Application Support/Alfred 2/Alfred.alfredpreferences/workflows/user.workflow.60CECC29-7BBE-4EE5-B5FC-C222BEEEFC55/workflow/web.py", line 220, in text
    return self.content
  File "/Users/j********/Library/Application Support/Alfred 2/Alfred.alfredpreferences/workflows/user.workflow.60CECC29-7BBE-4EE5-B5FC-C222BEEEFC55/workflow/web.py", line 205, in content
    self._content = self.raw.read()
AttributeError: 'NoneType' object has no attribute 'read'

During configuration, Pandoctor grabs the HTML of the Pandoc homepage and uses the README to build an index of all of its many flags and arguments. This error occurs when trying to get the text of the webpage. The internet connection isn't down (I've tested this: with no connection, you get an error when trying to make the connection), but web.py ends up with self.raw being None.

It works fine on my machine, so it must be some kind of environmental factor that I'm not thinking of, but I can't figure it out. I was wondering if you had any insight into what might cause web.py to end up with self.raw as None after a simple GET request?

Add data streaming?

Would it be possible/easy enough to add something akin to requests' iter_content method? That is, could streaming be added to access web resources too large to hold in memory in one lump?

Add option for logger not to print to console

In building my next workflow, I've implemented the caching functionality of alfred-workflow; however, this also means that every time I run test code from Sublime that uses caching, all the data I'm printing for my own benefit is trapped and cluttered within every call that alfred-workflow makes to the cache. I love the robust logging of what's happening in the script, and it's great for debugging another user's problems (open the log and send me the output), but I really need a way to shut off the console printing while I'm developing.

I tried commenting out these lines in the logger property code:

console = logging.StreamHandler()
...
console.setFormatter(fmt)
...
logger.addHandler(console)

However, this shut down all console printing (aside from direct print statements). So when an error occurred, the full Python traceback wouldn't be printed.

So, how can I tell alfred-workflow not to print its logs to the console, while still ensuring that Python prints its errors there?
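
One approach that doesn't require editing the library is to raise the threshold of the console handler after the logger is created. A sketch, assuming the handlers are reachable via wf.logger (FileHandler subclasses StreamHandler, hence the order of the checks):

import logging
from workflow import Workflow

wf = Workflow()
for handler in wf.logger.handlers:
    if isinstance(handler, logging.FileHandler):
        continue  # keep full logging going to the log file
    if isinstance(handler, logging.StreamHandler):
        handler.setLevel(logging.ERROR)  # console shows only errors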

Module imports not working

I followed the 3rd party section of your site for importing modules.

I ran the following and the requests module installed to my workflow folder:

pip install --target=my-workflow-root-dir/lib python-lib-name requests

My code is as follows:

import sys
import argparse
from workflow import Workflow, web, PasswordNotFound

def main(wf):
    import requests

if __name__ == '__main__':
    wf = Workflow(libraries=['./lib'])
    sys.exit(wf.run(main))

Which matches your documentation, but when I run the workflow it tells me there is no module named requests installed. Normally I'd just use the built-in web module, but it's not handling the JSON the way I need it to.

Empty settings.json file

Hi @deanishe,

I've had two recent reports saying that the workflow crashed because of an empty settings.json file. Because an empty file is not valid JSON, the function wf.settings.setdefault() will fail. A workaround right now is to delete the file so the code can create a new one later.

I don't have enough information to explain how that could happen. However, my proposal is that we could automatically recover from a broken settings.json file by deleting or replacing it. What do you think?
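
A sketch of that recovery, assuming the file's location is exposed as wf.settings_path: validate the JSON up front, and delete the file if it's broken so the library recreates it with defaults:

import json
import os

def recover_settings(wf):
    """Remove settings.json if it isn't valid JSON."""
    try:
        with open(wf.settings_path) as fp:
            json.load(fp)
    except (IOError, ValueError):
        # Empty or corrupt file; delete it so a fresh one is created
        if os.path.exists(wf.settings_path):
            os.unlink(wf.settings_path)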

BTW, how is v2 going? Sorry for not paying attention to the project for a long time.

Owen

Automated testing

I'm trying to finish up version 10.0 of ZotQuery (as you know), and one of the things I wanted to have in place before publishing it was some sort of unit testing. As I learned from you, unit testing is a vital part of workflow development, especially ongoing development. However, there are a few standard hurdles:

  • actually writing the tests (if you can introduce a bug in your code, you can also do so in your tests)
  • figuring out what should be expected for each test (for the assertEqual() statement)
  • covering your whole workflow (no blindspots for major bugs)

Now, ZotQuery in particular is very tricky to write tests for, because it is completely dependent upon a particular user's Zotero database. This means that writing the other half of the assertEqual statements requires finding things that will never change in my library. Moreover, I can never have a user run the tests to help me isolate a bug in their environment (as the assertEqual statement will be false by default, since they have a completely different database).

This is in addition to the ordinary hurdle of spending weeks (or even months) coding the actual workflow, only to have to then write all this new code just for the unit testing. It's exhausting, and therefore too often left undone.

I wanted to avoid this. First of all, I wanted to find another way to test without asserting equal (so any user can run it from any environment). Second, I wanted the testing to be automated. Third, I wanted the tests to have complete coverage. How to do this? Well, as you have advocated on the forums, the best way to write Alfred workflows (especially with Alfred-Workflow) is to write the workflow in an external Python script that you call via Alfred's bash interface (i.e. python zotquery.py search general "{query}"). Now, if you write your initial workflow in this manner, you actually do have a means of generating automated tests. Your code can work from the Terminal, and you have all of your workflow's bash commands stored in the info.plist file.

I have written a basic module to run all of your workflow's code for testing. It will get all script filters and all of their script connectors, and run those trees. So, filter1 -> action1, action2, action3; filter2 -> action1, action2, action3. It will get an argument from the script filter to use in all of the connected actions, so those will work properly. The simple idea is to run every script that your workflow uses and see what happens. If you have smart logging in your workflow, you can see what's happening in real time. It's not as strong as the assertEqual form of unit testing, but it's a hell of a lot easier to set up (basically no setup) and it ensures good coverage.

I've posted my initial version here in this gist. I thought that this might be a good addition to Alfred-Workflow, maybe in a testing.py module. There could and should probably be a few things added (like some finer grained testing), but the skeleton is strong. Ideally, you would simply launch Terminal, type in a simple command (like the command at the very bottom of the gist), and let it run your entire workflow for you. Then you simply inspect the results.

Let me know your thoughts, suggestions, and opinions.
stephen

Coveralls

Since the test coverage is perfect, why not show it?
If you activate coveralls.io for this repo, I could file a pull request to enable it.
It works well with Travis!

py2.6 support?

Hey guys,
What are your thoughts about supporting Python 2.6?
Alfred 2.0 supports OS X 10.6+, which ships with Python 2.6 by default.

I see on PyPI that the package is marked as py2.6-compatible; however, the unittests on Travis are not configured to run with py2.6, and from a quick check it looks broken (for example, '{}'.format() doesn't work in 2.6, only '{0}'.format()).

Thanks!

Launch agent creation/management?

This was suggested as part of #47 (periodically cleaning the cache).

Personally, I'm not a fan of creating cron jobs/launch agents from a workflow, but I'd like to hear other people's opinions on the potential usefulness of them.

Writing unittests

So I'm starting to create unit tests for ZotQuery, and I'm struggling to get alfred-workflow to work with me. While I'm certain to have other issues, my initial problem concerns the assertEqual function within unittest.

Specifically, I can't assertEqual when running a test query through my filter script. Here's my function:

def test_filter(self):
    args = [u'tester', u'general']
    oargs = sys.argv[:]
    sys.argv = [oargs[0]] + [s.encode('utf-8') for s in args]
    xml_res = filter.main(self.wf)
    no_res = """<?xml version="1.0" encoding="utf-8"?>
<items><item valid="no"><title>Error!</title><subtitle>No results found.</subtitle><icon>icons/n_error.png</icon></item></items>"""
    try:
        self.assertEqual(xml_res, no_res)
    finally:
        sys.argv = oargs[:]

When I run filter.main(self.wf), it prints the "No Results" XML to the console, but I can't trap it in a variable for assertEqual. How can I write unittests for a workflow written with alfred-workflow?
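
Since send_feedback() writes the XML to stdout, one way to trap it is to swap in a StringIO for the duration of the call. A sketch (Python 2):

import sys
from StringIO import StringIO

def run_and_capture(func, wf):
    """Run func(wf) and return everything it printed to stdout."""
    captured, old_stdout = StringIO(), sys.stdout
    sys.stdout = captured
    try:
        func(wf)
    finally:
        sys.stdout = old_stdout
    return captured.getvalue()

# usage in the test: xml_res = run_and_capture(filter.main, self.wf)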

how to get args from alfred?

I can't get args from Alfred using wf.args. How do I get args from Alfred?

#!/usr/bin/env python
# encoding: utf-8
import sys
from workflow import Workflow, web


def main(wf):
    arguments = wf.args
    wf.logger.debug(arguments)

The arguments list I get is empty.
