
rake-pipeline's Introduction

Rake::Pipeline

The canonical documentation for Rake::Pipeline is hosted at rubydoc.info.

New users are recommended to read GETTING_STARTED.md before anything else. Additional examples can be found in the examples directory.

Users should also check out rake-pipeline-web-filters for commonly used filters.

rake-pipeline's People

Contributors

bobspryn, dmathieu, dudleyf, ebryn, joliss, krisselden, martoche, morgoth, roman2k, tchak, wagenet, wycats, zeppelin


rake-pipeline's Issues

How to use rake-pipeline for assets development?

I recently discovered rake-pipeline by looking at the Travis CI setup, and it looks interesting for separating client-side and server-side assets into separate Rails engines.
However, as I am new to the project, it's a bit unclear to me how to use this setup in development mode, where I prefer working with modular files.

  1. Using Rake::Pipeline::ConcatFilter generates one big "compiled" file, if I understand correctly, but in development mode I just want to compile .less and .coffee to the corresponding .css and .js without concatenation. How would I do this?
  2. If I use a modular CSS framework like Bootstrap, I would accept the concatenation of files; however, there are custom variables that I want to set separately in a custom .less file. Is there a way to selectively concatenate files?

I could share some code of the experiments that I did, if it would be helpful.

Thanks for feedback!
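For reference, a per-file compile without concatenation might look like the following Assetfile sketch (an assumption, not a confirmed answer: it relies on the `coffee_script` and `less` helpers from rake-pipeline-web-filters, and on the fact that omitting any `concat` call leaves outputs one-to-one with inputs):

```ruby
# Hypothetical Assetfile sketch: compile each file in place, no concatenation.
# Assumes rake-pipeline-web-filters provides `coffee_script` and `less`.
require 'rake-pipeline-web-filters'

output 'public'
input 'assets' do
  match '**/*.coffee' do
    coffee_script   # each foo.coffee becomes foo.js; no concat step follows
  end

  match '**/*.less' do
    less            # each foo.less becomes foo.css
  end
end
```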

Source map support plan?

Sorry if this isn't the correct venue for this kind of discussion. If there is a better place to have this discussion (like a mailing list), please let me know.

Source maps are files that define mappings from a generated output to one or many constituent inputs. This mapping can be used to determine the original source location (line and column) given a generated source location. For example, given a stack trace for an error in generated JavaScript, the source location of the originating CoffeeScript could be located.

Another use case is finding the original name of something. For example, if a minification caused a token to be rewritten into something smaller, an error message containing the minified token could be rewritten to instead show the original token.

Some browsers have included support for source maps. The CoffeeScriptRedux project has some support for generating source maps. Google's Closure Compiler has very good support for source maps. UglifyJS2 also has preliminary source map support.

The story with individual source map transformations is pretty straightforward. Given one or many input files, the transformation generates one or many output files, each with its own source map.

With multiple layers of transformations, however, things get a little more complicated. The transformation needs to take as input any source files and their corresponding source maps in order to generate output with source maps that include the information from the input mappings. As far as I know, UglifyJS2 is the only project that has started to support input source maps.

With rake-pipeline's current filter API, I don't believe that multiple layers of transformations are possible. Ideally, source map support could be a library add-on, but I think that the rake-pipeline API needs to change in order to support source maps.

I would like to come up with a plan to support source maps. I am happy to help, if a consensus can be reached.

My proposal:

A rake-pipeline filter takes as input an array of FileWrapper objects. I would like to augment this object with an optional source_map property. If this property is not present, the file is considered to be the original source file.

A filter that supports source maps would be responsible for reading the input FileWrappers' source maps and generating output FileWrappers having source maps.

The DSL would also need some way to indicate where the source map for any input file can be found, and where the output source map(s) should be placed.
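To make the proposal concrete, here is a minimal self-contained sketch. This is not rake-pipeline's actual FileWrapper; the class name, the `source_map` accessor, and the `original_source?` helper are all hypothetical names used only to illustrate the proposed semantics:

```ruby
# Sketch of the proposed FileWrapper augmentation: an optional source_map
# property. A wrapper without one is considered an original source file.
class SourceMappedFileWrapper
  attr_reader :path
  attr_accessor :source_map  # proposed optional property

  def initialize(path, source_map: nil)
    @path = path
    @source_map = source_map
  end

  # True when no source map is attached, i.e. this is an original source.
  def original_source?
    source_map.nil?
  end
end
```

A filter supporting the proposal would read `source_map` from each input wrapper and attach a new one to each output wrapper.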

Feature: Directory Matcher

Here's a use case:

/sprites contains many directories. Each directory represents one sprite.

How can we do this? Here's one possible solution:

Dir['/sprites/*'].each do |sprite_name|
  match "/sprites/#{sprite_name}/*.png" do
    sprite
  end
end

Problems with this approach: all sprites have to be known at pipeline load time.
Load time is not the same as build time (when using the preview server).

Here's another solution: use a directory matcher to iterate over directories
and provide their contents as inputs to a filter.

directory "images/sprites" do
  sprite
end

I think this is a valid use case. The use case itself is abstract and useful in other scenarios.

I've included my rough implementation of this in the issue to see if there is interest in this
functionality.

I'd change this so that the DirectoryFilter can also be used in other pipelines. If the parent
pipeline is a DirectoryMatcher then do the custom behavior. Else call super.

module Rake::Pipeline::Web::Filters
  class DirectoryMatcher < Rake::Pipeline::Matcher

    # Take files matching the directory glob from the set of all files.
    # Return an array of arrays, where each element is the
    # contents of one input directory
    def eligible_input_files
      directory_files = input_files.select do |file|
        file.path.include? @glob
      end

      directory_files.inject({}) do |memo, input|
        directory_name = File.dirname(input.path)
        memo[directory_name] ||= []
        memo[directory_name] << input
        memo
      end.values
    end
  end

  class DirectoryFilter < Rake::Pipeline::Filter
    def initialize(*args, &block)
      block ||= proc { |input| File.basename input }
      super(&block)
    end

    # We are working with an array of arrays to act accordingly
    def input_files=(files)
      @input_files = files.map do |directory|
        directory.map { |f| f.with_encoding(encoding) }
      end
    end


    # Outputs look like this:
    # 
    #  {
    #    generated_output_name => directory_contents,
    #    generated_output_name => directory_contents.
    #  }
    #
    # This ensures the filter's `generate_output` method is called
    # once for each directory
    def outputs
      hash = {}

      input_files.each do |directory|
        directory_name = File.dirname directory.first.path

        path = output_name_generator.call(directory_name)

        output_file = file_wrapper_class.new(output_root, path, encoding)

        hash[output_file] = directory
      end

      hash
    end

    def output_files
      outputs.keys
    end

    # Just like normal
    def generate_output(inputs, output)
    end
  end

  module PipelineHelpers
    def directory(pattern, &block)
      matcher = pipeline.copy(DirectoryMatcher, &block)
      matcher.glob = pattern
      pipeline.add_filter matcher
      matcher
    end

    def directory_filter(*args, &block)
      filter Rake::Pipeline::Web::Filters::DirectoryFilter, *args, &block
    end
  end
end

Dynamic tasks make simple pipelines re-compile on subsequent runs

I'm seeing stuff like this:

# with the following pipelines calling `rakep` repeatedly will re-generate output for both coffee_script 
# and minispade for all matched files over and over.
#
# when using just either the coffee_script or minispade pipeline `rakep` will only generate files once
# on repeated executions.


require 'rake-pipeline-web-filters'

output 'public/scripts'
input 'assets/scripts' do
  match '**/*.coffee' do
    coffee_script
  end

  match '**/*.js' do
    modules = proc { |input| input.path.gsub(%r((^app/|lib/|\.js$)), '') }
    minispade(string: true, rewrite_requires: true, module_id_generator: modules)
  end
end

and

# also, this compiles handlebars only once, as expected:

output 'public/scripts'
input 'assets/scripts' do
  match '**/*.hbs' do
    keyname = proc { |input| input.path.sub(%r(^app/templates/), '').sub(/\.hbs$/, '') }
    handlebars(:key_name_proc => keyname)
  end

  match '**/*.hbs' do
    concat 'templates.js'
  end
end

# while this, again, will compile handlebars over and over:

output 'public/scripts'
input 'assets/scripts' do
  match '**/*.hbs' do
    keyname = proc { |input| input.path.sub(%r(^app/templates/), '').sub(/\.hbs$/, '') }
    handlebars(:key_name_proc => keyname)
    concat 'templates.js'
  end
end

When I downgrade to 3465e0e the problem seems to go away.

Benchmark / performance tests

We've had some significant performance regressions sneak in in the past (you're welcome). We should have some benchmarks/performance tests to guard against that in the future.

Find a way to automatically generate ember.js or other frameworks' libs.

When working on an application we have a bunch of frameworks that are cloned directly, and we need a way to regenerate a framework's libraries when we make a change to it, so we can work without breaking things :D. Right now we generate ember.js manually by running rakep dist, and the same for sproutcore-touch, sproutcore-routing, and so on; we could also add our own small framework.

I think the idea is to find all the other Assetfiles and chain them when something changes.

Pull Requests?

I have 4 pull requests (#113, #112, #111, #88) currently open and I'm about to send another. I understand if no one has had time to review them, but if I'm doing something wrong (or just not useful) can someone please let me know? I'd like to improve these and future pull requests. Thanks!

Cannot pass inputs from one filter to the next

I can no longer pass output from one filter to the next. I created an example with an Assetfile based on the Writing Filters section in GETTING_STARTED.md. When I run rakep against the latest head I get the following exception:


/Users/joe/Projects/oss/rake-pipeline/lib/rake-pipeline.rb:350:in `block in setup_filters': Temporary files cannot be input! /Users/joe/Sites/test-layouts/_tmp/rake-pipeline-3ea479df1be59c42b0b3e6f10af7e15863ca2c67/rake-pipeline-5399765c-tmp-1/a.js is inside a pipeline's tmp directory (Rake::Pipeline::TmpInputError)
    from /Users/joe/Projects/oss/rake-pipeline/lib/rake-pipeline.rb:349:in `each'
    from /Users/joe/Projects/oss/rake-pipeline/lib/rake-pipeline.rb:349:in `setup_filters'
    from /Users/joe/Projects/oss/rake-pipeline/lib/rake-pipeline.rb:370:in `block in setup_filters'
    from /Users/joe/Projects/oss/rake-pipeline/lib/rake-pipeline.rb:355:in `each'
    from /Users/joe/Projects/oss/rake-pipeline/lib/rake-pipeline.rb:355:in `inject'
    from /Users/joe/Projects/oss/rake-pipeline/lib/rake-pipeline.rb:355:in `setup_filters'
    from /Users/joe/Projects/oss/rake-pipeline/lib/rake-pipeline.rb:334:in `setup'
    from /Users/joe/Projects/oss/rake-pipeline/lib/rake-pipeline.rb:322:in `block in invoke'
    from <internal:prelude>:10:in `synchronize'
    from /Users/joe/Projects/oss/rake-pipeline/lib/rake-pipeline.rb:317:in `invoke'
    from /Users/joe/Projects/oss/rake-pipeline/lib/rake-pipeline/project.rb:121:in `each'
    from /Users/joe/Projects/oss/rake-pipeline/lib/rake-pipeline/project.rb:121:in `block in invoke'
    from <internal:prelude>:10:in `synchronize'
    from /Users/joe/Projects/oss/rake-pipeline/lib/rake-pipeline/project.rb:112:in `invoke'
    from /Users/joe/Projects/oss/rake-pipeline/lib/rake-pipeline/cli.rb:19:in `build'
    from /Users/joe/.rbenv/versions/1.9.3-p194/gemsets/test-layouts/gems/thor-0.16.0/lib/thor/task.rb:27:in `run'
    from /Users/joe/.rbenv/versions/1.9.3-p194/gemsets/test-layouts/gems/thor-0.16.0/lib/thor/invocation.rb:120:in `invoke_task'
    from /Users/joe/.rbenv/versions/1.9.3-p194/gemsets/test-layouts/gems/thor-0.16.0/lib/thor.rb:275:in `dispatch'
    from /Users/joe/.rbenv/versions/1.9.3-p194/gemsets/test-layouts/gems/thor-0.16.0/lib/thor/base.rb:425:in `start'
    from /Users/joe/Projects/oss/rake-pipeline/bin/rakep:4:in `<top (required)>'
    from /Users/joe/.rbenv/versions/1.9.3-p194/gemsets/test-layouts/bin/rakep:23:in `load'
    from /Users/joe/.rbenv/versions/1.9.3-p194/gemsets/test-layouts/bin/rakep:23:in `<main>'

I'm pretty sure this is a bug; shouldn't that work?

Circular Dependency Error

I'm getting a circular dependency error in invoke_call_chain where:

tmp/tmp-1/A => original/A => tmp/tmp-2/A => tmp/tmp-1/A

It makes sense that tmp/tmp-2/A would depend on tmp/tmp-1/A, as that's part of the filter chain, but why would the original file depend on a tmp file? My filter's additional_dependencies method returns [] and there are no dynamic prereqs. Are there any cases where this is expected to happen?

Thanks!
Joe

Rake::Pipeline::Graph?

I'm building an app on top of Rake::Pipeline and I'm trying to implement a feature similar to Jekyll and Middleman's YAML front matter rendering. I just came across the Rake::Pipeline::Graph object and I think it could provide exactly what I need. Am I correct in assuming that this is there to use if needed and implement for our own case? Am I safe relying on it or is it something that may go away in the near future?

Output of multiple match filters concats to the same output file

I would like all my JavaScript concatenated into one file:

<script type="text/javascript" src="source/app.js"></script>

My current Assetfile, which does not work, is something like:

input "assets/vendor" do
  match "*.js" do
    filter Rake::Pipeline::OrderingConcatFilter,
      ["minispade.js", "qrcode.js", "jquery.js", "jquery.transit.js"], "app.js"
  end
end

input "packages" do
  match "*/lib/**/*.js" do
    minispade :rewrite_requires => true, :string => false, :module_id_generator => proc { |input|
      id = input.path.dup
      id.sub!('/lib/', '/')
      id.sub!(/\.js$/, '')
      id.sub!(/\/main$/, '')
      id
    }
    filter ConcatFilter, "app.js"
  end
end

input "app" do
  match "*/lib/**/*.js" do
    minispade :rewrite_requires => true, :string => false, :module_id_generator => proc { |input|
      id = input.path.dup
      id.sub!('/lib/', '/')
      id.sub!(/\.js$/, '')
      id.sub!(/\/main$/, '')
      id
    }
    filter ConcatFilter, "app.js"
  end
end

Any workaround to make that possible?

ERROR Rack::Lint::LintError: The file identified by body.to_path does not exist

I often see this error in the logs of rakep server

ERROR Rack::Lint::LintError: The file identified by body.to_path does not exist
    /Users/rajat/.rvm/gems/ruby-1.9.2-p320/gems/rack-1.4.1/lib/rack/lint.rb:19:in `assert'
    /Users/rajat/.rvm/gems/ruby-1.9.2-p320/gems/rack-1.4.1/lib/rack/lint.rb:543:in `each'
    /Users/rajat/.rvm/gems/ruby-1.9.2-p320/gems/rack-1.4.1/lib/rack/body_proxy.rb:26:in `method_missing'
    /Users/rajat/.rvm/gems/ruby-1.9.2-p320/gems/rack-1.4.1/lib/rack/chunked.rb:23:in `each'
    /Users/rajat/.rvm/gems/ruby-1.9.2-p320/gems/rack-1.4.1/lib/rack/handler/webrick.rb:71:in `service'
    /Users/rajat/.rvm/rubies/ruby-1.9.2-p320/lib/ruby/1.9.1/webrick/httpserver.rb:111:in `service'
    /Users/rajat/.rvm/rubies/ruby-1.9.2-p320/lib/ruby/1.9.1/webrick/httpserver.rb:70:in `run'
    /Users/rajat/.rvm/rubies/ruby-1.9.2-p320/lib/ruby/1.9.1/webrick/server.rb:183:in `block in start_thread'

This mostly costs me an extra refresh or restarting the server as some of the assets on the requested page are not loaded.

Any pointers why?

Project status and volunteering to help

Hello rake-pipeline team,

I've been periodically coming back to this repo after discovering it a while back only to find it getting dustier and dustier. Under normal circumstances I would jump in and start working some issues but that doesn't seem like the right tactic here.

Is there anything I can do to help out? I know there are several other people that would help as well. Can we come up with a way to get some more contributors set up so the pipeline can move forward?

Thanks so much!

is v0.6.0 gem officially released?

I didn't see it on rubygems.org, although it seems to have been around since Feb. Is rake-pipeline still being actively developed?

Thanks,
Eric

rakep 'watch' option

I would love to have a rakep watch option, where it just watches for updated files and builds updated files. This would eliminate the need to use the :9292 port when addressing the files during development. This would make it easier to use in lots of different environments, for instance alongside different backends like GAE that run on their own ports in development.

Input directory could work with symbolic links.

To improve package management, I would like to place symbolic links in my Assetfile input folder.

That way, I could reuse my packages across different projects, and all the projects would use the same instance of a package. I would only need to update the global package to update all the symlinks as well.

This is not currently supported: no files are output when symbolic links are used.

Save additional key/values to manifest file

I'm in a situation where I need to save some metadata about each file in a site between builds. My initial thought was to save a json file with this metadata in it; since rake-pipeline is already doing that with the manifest file, it would be great to be able to merge a hash with some additional key/values into the one in ManifestEntry. For example, I might have a manifest like:

{
  "about.html": {
    "deps": {
      "other.h": "2000-01-01 00:00:00 +0000"
    },
    "mtime":"2000-01-01 00:00:00 +0000",
    "type": "page",
    "layout": "layout",
    "title": "about"
  }
}

I think this could be done pretty easily if manifest_entry allowed for specifying an additional hash using a block that gets configured at some point before the pipeline is invoked. Is there any interest in something like this?
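The idea can be modeled with a small self-contained sketch. This is a hypothetical API, not rake-pipeline's real ManifestEntry: a block supplies extra key/values that get merged on top of the deps/mtime the manifest already tracks.

```ruby
require 'json'

# Sketch: a manifest entry that merges extra user-provided metadata,
# supplied via a block, on top of the deps/mtime the pipeline tracks.
class MetadataManifestEntry
  def initialize(deps, mtime, &extra)
    @deps  = deps
    @mtime = mtime
    @extra = extra # block returning a Hash of additional key/values
  end

  # Hash representation suitable for JSON.generate when writing manifest.json
  def as_json
    base = { "deps" => @deps, "mtime" => @mtime }
    @extra ? base.merge(@extra.call) : base
  end
end
```

The block would be configured once, before the pipeline is invoked, and applied to every entry as the manifest is written.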

FileWrapper.fullpath fails on Windows

FileWrapper.fullpath expects root to start with a '/'. On Windows this fails because the root path starts with a drive letter (e.g. 'C:').

      def fullpath
        raise "#{root}, #{path}" unless root =~ /^\//
        File.join(root, path)
      end
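A hedged sketch of a cross-platform fix (rewritten here as a free function for testability; the real method lives on FileWrapper): accept both a POSIX root and a drive-letter root instead of requiring a leading '/'.

```ruby
# Sketch: a fullpath check that accepts '/unix/root' as well as
# 'C:/windows/root' or 'C:\windows\root', instead of requiring /^\//.
def fullpath(root, path)
  raise "#{root}, #{path}" unless root =~ %r{\A(?:[A-Za-z]:)?[/\\]}
  File.join(root, path)
end
```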

Projects & Pipelines are confusing

The difference between projects and pipelines is confusing. They each have their own methods implemented via the DSL. It's also unclear which is instantiated by default. I think some documentation is needed to clear this up.

Declaring dependencies of inputs

I'm not sure the maintainers have been notified about this yet. With the current implementation, if I use import statements that point to other files and then update the imported/dependent file, the change won't make it into the next output file.

Below is an API suggestion I wanted to share.

input "less" do

  match "views/*.less" depends "import/*.less" do
     filter CustomFilter
     concat "project.css"
  end

end

Should we copy all files by default?

1930991 added a concat filter to the end of every pipeline so that all unmatched inputs get copied to the output directory. Is this actually a good idea? Now everything gets copied whether we want it to or not. I think we either need to be able to exclude files somehow, or we revert back to making people add their own concat filter to the end if they want it. I'm leaning toward the latter, since it's much easier to implement :)
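If we did revert, the opt-in version could be sketched in an Assetfile like this (an assumption about the DSL, based on the commit's description: a trailing ConcatFilter with no explicit output name copies remaining inputs through under their own names):

```ruby
# Hypothetical Assetfile sketch: copy-everything-else made explicit,
# rather than appended to every pipeline automatically.
output 'public'
input 'assets' do
  match '**/*.coffee' do
    coffee_script
  end

  # explicit catch-all: copy any remaining inputs through unchanged
  match '**/*' do
    concat
  end
end
```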

Make provisions for filters that can output two files

I'm trying to concatenate all my scripts into app.raw.js (uncompressed) and app.min.js (compressed) using rake-pipeline-web-filters. Here's my solution:

['raw', 'min'].each do |type|
  input "src" do
    match "*.coffee" do
      coffee_script
      concat %w[
        start.js
        base.js
        line.js
      ], "app.#{type}.js"

      uglify  if type == 'min'
    end
  end
end

...However, it feels hackish because this will needlessly invoke coffee_script twice per file. I was expecting a solution more along the lines of:

input "src" do
  match "*.coffee" do
    coffee_script
    concat %w[
      start.js
      base.js
      line.js
    ], "app.js"
  end

  match "app.js" do
    copy "app.raw.js"
    copy "app.min.js" # <-- doesn't work: this effectively renames app.raw.js to app.min.js, not copying it.
  end

  match "app.min.js" do
    uglify
  end
end

Rake tasks are invoked when they shouldn't be.

I've been chasing down an issue related to "caching" in my application's pipeline. I think I've finally found the problem. I'm assuming this is true: I build the pipeline, I don't change anything, and building the pipeline again should "skip" everything since no input files have changed. I.e.: only rebuild the pipeline if input files have changed.

This behavior is accomplished through Rake::FileTask. Each filter in the pipeline is a rake task set up with prereqs (the input files). Rake::FileTask decides whether a task should be invoked by looking at itself and its prereqs; if any of those need to be invoked, it is run. Rake::FileTask simply checks File.mtime to see if a file has changed: it compares the prereq's timestamp to the file's own, and if the prereq is newer, the task needs to be invoked again. This is the call to super in Rake::Pipeline::DynamicFileTask#needed?. The class also does its own logic, but that's unimportant here because it returns early if super is true.

Internally, all files in the build process are copied into a tmp directory. This makes sense. However, there is a problem: the tmp directory name is unique per process. A counter is incremented automatically every time the method is called (see https://github.com/livingsocial/rake-pipeline/blob/master/lib/rake-pipeline.rb#L389). This means isolated builds of the same pipeline (same inputs and Assetfile) generate the same sequence of temp directories, so the intermediate build files are the same and are skipped accordingly. The problem occurs when multiple builds happen in the same process, which is exactly the use case for the preview server (though I'm not sure if that's the intended use case).

Building the pipeline multiple times in the same process keeps incrementing the class variable. The second time the pipeline is invoked, files are placed into new temporary directories and the dependencies are wired up again. But since a new file is created every time, File.mtime will always be newer than before, causing rake tasks to be invoked when they shouldn't be. I think this is a major problem.

Here is my debugging session from deep inside rake to find out what bit of code is actually making that decision. It's from inside this method: https://github.com/jimweirich/rake/blob/master/lib/rake/file_task.rb#L32

(rdb:1) name
# output from first run of the pipeline
"/Users/adam/radium/frontend/tmp/builds/javascript/vendor.js"
# now its timestamp (it's more in the past because it was built in the first build)
(rdb:1) timestamp
2012-10-11 22:01:50 +0200

# its prereq, dumped into a fresh new tmp directory
(rdb:1) application[prereq, @scope].name
"/Users/adam/radium/frontend/tmp/rake-pipeline-e482cc0c93b30eef7b1d05fc2b90385121075db2/rake-pipeline-tmp-26/vendor.js"

# and its timestamp; this now fails the comparison and everything is built again.
(rdb:1) application[prereq, @scope].timestamp
2012-10-11 22:01:52 +0200

Here is the assetfile used in my project: https://github.com/radiumsoftware/iridium/blob/master/lib/iridium/Assetfile

@dudleyf @wycats can you confirm this is a bug?

Support multiple outputs for the same input

A CacheBustingFilter would ideally behave as follows:

  • take input from input/foo.css
  • write it directly to output/foo.css
  • also write to output/foo-abcde123456.css

I would imagine the following syntax:

filter MyFilter do |input|
  [ input, hashify(input) ]
end

(where hashify is something MyFilter defines).
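The proposed block amounts to an output-name generator that returns an array of names. A self-contained sketch of that shape (hashify here is a hypothetical helper based on a path digest, since the original leaves its definition to MyFilter):

```ruby
require 'digest'

# Hypothetical hashify: insert a short digest before the extension,
# e.g. 'foo.css' -> 'foo-1a2b3c4d.css'.
def hashify(path)
  digest = Digest::SHA1.hexdigest(path)[0, 8]
  path.sub(/(\.[^.]+)\z/) { "-#{digest}#{Regexp.last_match(1)}" }
end

# The filter's output-name generator would return both names,
# producing output/foo.css and output/foo-<digest>.css from one input.
cache_busting_outputs = ->(input_path) { [input_path, hashify(input_path)] }
```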

Pass data from one filter to another?

Let's say I want to add a filter that allows template rendering options (mainly the layout) to be specified in YAML front matter. It seems to me that filters should be small and composable; therefore I don't want to force my Tilt filter to also handle front matter. How would I write a filter in such a way that it can parse out the front matter then hand that off to the next filter?
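One way to model the hand-off (a sketch, not an existing rake-pipeline API): a small front-matter step splits the YAML header from the body, and the parsed options ride along for the next filter to consume.

```ruby
require 'yaml'

# Sketch: split optional YAML front matter ('---' fenced, Jekyll-style)
# from a template. Returns [options_hash, body]; a later filter (e.g. a
# Tilt filter) would render the body and read options like the layout.
def split_front_matter(source)
  if source =~ /\A---\s*\n(.*?)\n---\s*\n/m
    [YAML.safe_load(Regexp.last_match(1)), Regexp.last_match.post_match]
  else
    [{}, source]
  end
end
```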

Implement a Pass-Through filter

A pass-through filter is like a matcher: it lets you match against a glob like a matcher does, but it can also select files outside the glob. In addition, the outputs from its filters do not directly correspond to the inputs. Here's an example:

# This line has to allow all files because eventually
# `index.html` is needed down the line by a filter. 
# `index.html` must be an input so its content can be
# read and its content be replaced.
allow "*.handlebars" do
  # works like a normal filter but by default only uses
  # the pipeline's matched files. Matched files match the
  # the glob specified in `allow`
  handlebars_script_tag

  # This filter takes the outputs of the previous filter
  # HTML script tags and injects them into `index.html`. 
  # This filter must read index.html as an input and 
  # write to it as an output. The trick here is to 
  # keep the matched files the inputs so they can 
  # be manipulated by other filters.
  inject_script_tag "index.html"

  # Other filters can request files outside the main
  # glob as inputs at this point and continue their work.
  # It's important to note that the inputs to filters at this
  # point are not just "index.html". They are all the files
  # in the pipeline.
end

Implementing a pass through filter also makes generating a cache manifest easy. Simply match everything and generate an output. The original inputs are still there. This also encompasses the idea that sometimes you don't want to destroy input files, just use them.

I have implemented something like this for use in Iridium. I'm wondering if it can be adapted and pushed upstream.

Release a new version

The deletion fix was important enough to go into a patch version. I don't see anything else changing for a while. Can we get a release please? I don't like having to bind iridium to a git repo.

Ordering not respected in pipelines with multiple inputs

input accepts an array. I've constructed that array in the proper order. I know that rake-pipeline loops over all the inputs in array order. c226368 created this bug. This is only an issue when using multiple inputs; the tests only cover a single input. I don't think glob order is a problem. I think sort should only be applied to each input glob, not to the final set.

Here's an example array:

array = [ 
  '/Users/adam/radium/iridium/test/app/external/app/config/initializers',
  '/Users/adam/radium/iridium/test/app/app/config/initializers'
]

input array do
  # test/app/app/config/initializers is processed before test/app/external/app/config/initializers
  # because the entire array is sorted at the end of Pipeline#input_files
end

We were seeing some misencoded characters

We need to investigate whether there are cases where characters become misencoded. This may well be resolved, but we were seeing it in the early days of rake-pipeline and should investigate.

Should be able to exclude files

We should have some sort of facility for excluding files from the pipeline:

input "assets", "**/*", :exclude => "*.txt"

or maybe

input "assets"
exclude "*.txt"

Server Middleware blows up if pass Project instead of Pipeline

I came across this in iridium. I had created a Project from an Assetfile and mistakenly passed it to the middleware, which expects a Pipeline or an Assetfile. A Project is then instantiated from the pipeline argument, which creates nested projects. Projects don't have an output_root, so you get weird errors.

Tests failing?

I have been trying to get rake-pipeline to build a very simple Assetfile (just input and output statements), but it fails every single time (tested only on ruby 2.1.1) when trying to open the manifest.json file, with an rb_sysopen error for tmp/rake-pipeline-[sha1]/manifest.json.

Have tried running tests on these ruby versions:

   ruby-1.9.3-p286 [ x86_64 ]
   ruby-1.9.3-p484 [ x86_64 ]
   ruby-1.9.3-p545 [ x86_64 ]
   ruby-2.0.0-p353 [ x86_64 ]
   ruby-2.1.1 [ x86_64 ]

Tests are failing for all these versions. Here are the failing tests:

242 examples, 44 failures

Failed examples:

rspec ./spec/filter_spec.rb:306 # Rake::Pipeline::Filter a filter with additional_dependencies creates its output files
rspec ./spec/manifest_spec.rb:50 # Rake::Pipeline::Manifest#write_manifest writes nothing if it's empty
rspec ./spec/manifest_spec.rb:43 # Rake::Pipeline::Manifest#write_manifest writes a manifest json file to disk
rspec ./spec/encoding_spec.rb:72 # the pipeline's encoding handling when the input is UTF-8 creates the correct file
rspec ./spec/encoding_spec.rb:89 # the pipeline's encoding handling when dealing with only BINARY-type filters does not raise an exception
rspec ./spec/rake_acceptance_spec.rb:652 # A realistic project should work with nested matchers
rspec ./spec/rake_acceptance_spec.rb:711 # A realistic project Handling mistakes should not raise an error for internal temporary files
rspec ./spec/rake_acceptance_spec.rb:589 # A realistic project Dynamic dependencies transitive dependencies it should behave like a pipeline with dynamic files should handle dynamic dependencies being deleted
rspec ./spec/rake_acceptance_spec.rb:589 # A realistic project Dynamic dependencies direct dependencies it should behave like a pipeline with dynamic files should handle dynamic dependencies being deleted
rspec ./spec/rake_acceptance_spec.rb:197 # A realistic project using the pipeline DSL using multiple pipelines behaves like the pipeline DSL can be configured using the pipeline DSL
rspec ./spec/rake_acceptance_spec.rb:233 # A realistic project using the pipeline DSL using multiple pipelines behaves like the pipeline DSL can be restarted to reflect new files
rspec ./spec/rake_acceptance_spec.rb:208 # A realistic project using the pipeline DSL using multiple pipelines behaves like the pipeline DSL can be invoked repeatedly to reflected updated changes
rspec ./spec/rake_acceptance_spec.rb:202 # A realistic project using the pipeline DSL using multiple pipelines behaves like the pipeline DSL can be configured using the pipeline DSL with an alternate Rake application
rspec ./spec/rake_acceptance_spec.rb:197 # A realistic project using the pipeline DSL the raw pipeline DSL (with block strip_asserts_filter) behaves like the pipeline DSL can be configured using the pipeline DSL
rspec ./spec/rake_acceptance_spec.rb:233 # A realistic project using the pipeline DSL the raw pipeline DSL (with block strip_asserts_filter) behaves like the pipeline DSL can be restarted to reflect new files
rspec ./spec/rake_acceptance_spec.rb:208 # A realistic project using the pipeline DSL the raw pipeline DSL (with block strip_asserts_filter) behaves like the pipeline DSL can be invoked repeatedly to reflected updated changes
rspec ./spec/rake_acceptance_spec.rb:202 # A realistic project using the pipeline DSL the raw pipeline DSL (with block strip_asserts_filter) behaves like the pipeline DSL can be configured using the pipeline DSL with an alternate Rake application
rspec ./spec/rake_acceptance_spec.rb:197 # A realistic project using the pipeline DSL using the matcher spec behaves like the pipeline DSL can be configured using the pipeline DSL
rspec ./spec/rake_acceptance_spec.rb:233 # A realistic project using the pipeline DSL using the matcher spec behaves like the pipeline DSL can be restarted to reflect new files
rspec ./spec/rake_acceptance_spec.rb:208 # A realistic project using the pipeline DSL using the matcher spec behaves like the pipeline DSL can be invoked repeatedly to reflected updated changes
rspec ./spec/rake_acceptance_spec.rb:202 # A realistic project using the pipeline DSL using the matcher spec behaves like the pipeline DSL can be configured using the pipeline DSL with an alternate Rake application
rspec ./spec/rake_acceptance_spec.rb:197 # A realistic project using the pipeline DSL the raw pipeline DSL (with simple strip_asserts_filter) behaves like the pipeline DSL can be configured using the pipeline DSL
rspec ./spec/rake_acceptance_spec.rb:233 # A realistic project using the pipeline DSL the raw pipeline DSL (with simple strip_asserts_filter) behaves like the pipeline DSL can be restarted to reflect new files
rspec ./spec/rake_acceptance_spec.rb:208 # A realistic project using the pipeline DSL the raw pipeline DSL (with simple strip_asserts_filter) behaves like the pipeline DSL can be invoked repeatedly to reflected updated changes
rspec ./spec/rake_acceptance_spec.rb:202 # A realistic project using the pipeline DSL the raw pipeline DSL (with simple strip_asserts_filter) behaves like the pipeline DSL can be configured using the pipeline DSL with an alternate Rake application
rspec ./spec/rake_acceptance_spec.rb:197 # A realistic project using the pipeline DSL the raw pipeline DSL (with before_filter) behaves like the pipeline DSL can be configured using the pipeline DSL
rspec ./spec/rake_acceptance_spec.rb:233 # A realistic project using the pipeline DSL the raw pipeline DSL (with before_filter) behaves like the pipeline DSL can be restarted to reflect new files
rspec ./spec/rake_acceptance_spec.rb:208 # A realistic project using the pipeline DSL the raw pipeline DSL (with before_filter) behaves like the pipeline DSL can be invoked repeatedly to reflected updated changes
rspec ./spec/rake_acceptance_spec.rb:202 # A realistic project using the pipeline DSL the raw pipeline DSL (with before_filter) behaves like the pipeline DSL can be configured using the pipeline DSL with an alternate Rake application
rspec ./spec/rake_acceptance_spec.rb:197 # A realistic project using the pipeline DSL using multiple pipelines (with after_filters) behaves like the pipeline DSL can be configured using the pipeline DSL
rspec ./spec/rake_acceptance_spec.rb:233 # A realistic project using the pipeline DSL using multiple pipelines (with after_filters) behaves like the pipeline DSL can be restarted to reflect new files
rspec ./spec/rake_acceptance_spec.rb:208 # A realistic project using the pipeline DSL using multiple pipelines (with after_filters) behaves like the pipeline DSL can be invoked repeatedly to reflected updated changes
rspec ./spec/rake_acceptance_spec.rb:202 # A realistic project using the pipeline DSL using multiple pipelines (with after_filters) behaves like the pipeline DSL can be configured using the pipeline DSL with an alternate Rake application
rspec ./spec/rake_acceptance_spec.rb:197 # A realistic project using the pipeline DSL using the matcher spec (with multiple inputs to a single pipeline) behaves like the pipeline DSL can be configured using the pipeline DSL
rspec ./spec/rake_acceptance_spec.rb:233 # A realistic project using the pipeline DSL using the matcher spec (with multiple inputs to a single pipeline) behaves like the pipeline DSL can be restarted to reflect new files
rspec ./spec/rake_acceptance_spec.rb:208 # A realistic project using the pipeline DSL using the matcher spec (with multiple inputs to a single pipeline) behaves like the pipeline DSL can be invoked repeatedly to reflected updated changes
rspec ./spec/rake_acceptance_spec.rb:202 # A realistic project using the pipeline DSL using the matcher spec (with multiple inputs to a single pipeline) behaves like the pipeline DSL can be configured using the pipeline DSL with an alternate Rake application
rspec ./spec/rake_acceptance_spec.rb:133 # A realistic project a pipeline supports filters with multiple outputs per input
rspec ./spec/rake_acceptance_spec.rb:111 # A realistic project a pipeline can successfully apply filters
rspec ./spec/rake_acceptance_spec.rb:168 # A realistic project a pipeline can be configured using the pipeline
rspec ./spec/project_spec.rb:81 # Rake::Pipeline::Project has a pipeline
rspec ./spec/project_spec.rb:156 # Rake::Pipeline::Project #cleanup_tmpdir leaves the current assetfile-digest tmp dir alone
rspec ./spec/project_spec.rb:150 # Rake::Pipeline::Project #cleanup_tmpdir cleans old rake-pipeline-* dirs out of the pipeline's tmp dir
rspec ./spec/project_spec.rb:114 # Rake::Pipeline::Project #invoke writes temp files to a subdirectory of the tmp dir named after the assetfile digest

Any ideas?

Assets broken on Windows

On Windows, no static assets (js, css, images) are served.
I always get a 404 error, although all assets are deployed correctly.

I already tried changing the fullpath as described in issue #60.

Easy way to add gems' assets as input directories

For example, I'd like a project with bootstrap-sass in the Gemfile to easily add its vendor/assets/* folders to the input paths. It doesn't have to be as auto-magical as Rails+Sprockets (where anything in lib/assets/* is added to Sprockets's path), but it would be nice to have a shorter version of

Dir.glob(Pathname.new(Gem.loaded_specs["bootstrap-sass"].gem_dir).join('vendor/assets/*').to_s).each do |path|
  input path
end
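One possible shape for such a shorthand, sketched as a plain helper. The name `gem_asset_paths` and the `vendor/assets/*` layout are assumptions (borrowed from the Rails asset-gem convention), not existing rake-pipeline API:

```ruby
require "pathname"

# Hypothetical helper: returns the asset directories shipped inside an
# installed gem, or an empty array if the gem is not loaded.
def gem_asset_paths(gem_name, glob = "vendor/assets/*")
  spec = Gem.loaded_specs[gem_name]
  return [] unless spec # gem not in the bundle
  Dir.glob(Pathname.new(spec.gem_dir).join(glob).to_s)
end

# In an Assetfile, assuming the helper is loaded:
#   gem_asset_paths("bootstrap-sass").each { |path| input path }
```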

Serving content on Windows corrupts some files

Hi!

I have been investigating an issue with corrupted PNGs and have tracked down at least one place where things do not work as expected.

I had an uncorrupted PNG file in the assets folder, but when served through the rakep server the browser received it corrupted.

I was able to fix that by switching to a binary-mode `File.open` in middleware.rb:

  def response_for(file)
    [ 200, headers_for(file), File.open(file, "rb") ]
  end

I am also seeing that the same uncorrupted PNG file in the tmp directory is corrupted once it is copied into the assets folder. I assume the fix would be similar, but I was unable to track down exactly which part of the code base copies files from the tmp folders to the assets folder.
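A minimal sketch of the byte-for-byte copy that would avoid this corruption, assuming the copy step can be swapped for plain binary IO (`binary_copy` is a hypothetical name, not the actual rake-pipeline code path):

```ruby
# Copy a file byte-for-byte. File.binread/File.binwrite open the files
# in binary mode, so Windows performs no newline translation.
def binary_copy(src, dest)
  File.binwrite(dest, File.binread(src))
end
```

`FileUtils.cp` from the standard library is also binary-safe and could serve the same purpose.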

Copy filter works incorrectly on binary files

When I try to copy binary files such as images (ico, png), the output file size always grows.

AssetFile

$: << 'lib'

require 'rake-pipeline'

output 'src/images'

input 'public/images' do
  match '*.png' do
    copy
  end
end

Command used

$ rakep

Environment

Ruby 1.9.3p194 (2012-04-20) [i386-mingw32]
Windows 8 x64 bits

gem rake-pipeline 0.7.0
gem rake-pipeline-web-filters 0.7.0
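The size growth is consistent with Windows text-mode newline translation: every LF byte (0x0A) in the PNG data is written out as CRLF (0x0D 0x0A), so the file grows by one byte per LF on each copy. A small, OS-independent simulation of that translation (hypothetical helper, for illustration only):

```ruby
# Simulates what Windows text-mode output does to binary data:
# each bare "\n" byte is expanded to "\r\n", so files containing
# 0x0A bytes (common in PNGs) grow on every text-mode copy.
def simulate_text_mode_write(bytes)
  bytes.gsub("\n".b, "\r\n".b)
end

png_header = "\x89PNG\r\n\x1a\n".b   # 8 bytes, two of them 0x0A
simulate_text_mode_write(png_header).bytesize # => 10
```

Opening the files in binary mode ("rb"/"wb"), as in the middleware fix above, disables this translation.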
