What’s new

08.12.17: We published evaluation results for Information-Flow Matting [20].
07.04.17: Subjective-study results are now available.
26.12.16: We published evaluation results for Self-Adaptive Matting [17].
16.11.16: We published evaluation results for Deep Matting [16].
6.04.16: We published evaluation results for Sparse Sampling Matting [14].
15.12.15:
1) New sequences with natural hair, including 3 public sequences.
2) New temporal-coherency metrics chosen through careful analysis (see [3]).
3) New trimap-generation method (more natural-looking and accurate).
4) Better ground-truth quality owing to correction of lighting changes during capture.
5) Improved website loading speed and interface.
7.09.15: We published the paper describing our benchmark [3].
30.12.14: We published results for multiple trimap levels; use the drop-down menu at the top left corner to switch between them.
29.12.14: We added a general ranking to the rating table.
10.11.14: “Sparse codes as Alpha Matte” was added.
26.09.14: Source sequences are now available for online viewing. Full-screen mode was added.
30.08.14: Composite sequences are now available.
27.08.14: The “Refine Edge tool in Adobe After Effects” was added.
25.08.14: The official opening.

Overview

Introduction

The VideoMatting project is the first public objective benchmark for video-matting methods. It contains scatter plots and rating tables for different quality metrics. In addition, results for participating methods are available for viewing in a player equipped with a movable zoom region. We believe our work will help rank existing methods and aid developers of new methods in improving their results.

Datasets

The data set consists of five moving objects captured in front of a green plate and seven captured using the stop-motion procedure described below. We composed the objects over a set of background videos with various levels of 3D camera motion, color balance, and noise. We published ground-truth data for two stop-motion sequences and hid the rest to ensure fairness of the comparison.

Using thresholding and morphological operations on the ground-truth alpha mattes, we generated narrow trimaps. We then dilated the results using graph-cut-based energy minimization, which yields trimaps that look more hand-drawn than those produced by common morphological dilation.
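As a rough illustration of the first step (a sketch only, with assumed thresholds and radius rather than the benchmark's exact settings; the graph-cut-based dilation is omitted), a narrow trimap can be derived from a ground-truth alpha matte with simple thresholding and erosion:

import cv2
import numpy as np

def narrow_trimap(alpha, fg_thresh=0.95, bg_thresh=0.05, band=5):
    # alpha: float array in [0, 1]; thresholds and erosion radius are illustrative only.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (2 * band + 1, 2 * band + 1))
    # Confident foreground/background masks, eroded to keep a margin around the boundary.
    fg = cv2.erode((alpha >= fg_thresh).astype(np.uint8), kernel)
    bg = cv2.erode((alpha <= bg_thresh).astype(np.uint8), kernel)
    # Everything that is neither confident foreground nor background becomes unknown (gray).
    trimap = np.full(alpha.shape, 128, dtype=np.uint8)
    trimap[fg == 1] = 255
    trimap[bg == 1] = 0
    return trimap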

Chroma Keying

[Figure: alpha mattes from chroma keying (green screen) and from stop-motion capture for the same image region. The stop-motion result preserves details significantly better.]

Chroma keying is a common practice in the cinema industry: the cinematographer captures an actor in front of a green or blue screen, and the VFX expert then replaces the background using special software. Our evaluation uses five green-screen video sequences with a significant amount of semitransparency (e.g., hair or motion blur), provided to us by Hollywood Camera Work. We extract alpha mattes and corresponding foregrounds using The Foundry Keylight. Chroma keying enables us to obtain alpha mattes of natural-looking objects with arbitrary motion. Nevertheless, this technique cannot guarantee natural alpha maps, because it assumes the screen color is absent from the foreground object. To obtain alpha maps with a fully natural appearance, we use the stop-motion method described below.
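For reference, both chroma keying and matting in general rest on the standard compositing equation; the sketch below states it and the assumption that makes keying solvable (this is textbook background, not a description of Keylight's internals):

% Each observed pixel I is a blend of foreground F and background B,
% weighted by the transparency \alpha \in [0, 1]:
I = \alpha F + (1 - \alpha) B
% Chroma keying treats B as the (approximately constant) screen color and
% solves for \alpha and F; the estimate breaks down when F itself contains
% the screen color, which is why the resulting mattes need not be natural.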

Stop Motion

[Figure: one-step capture over different backgrounds. We use checkerboard backgrounds instead of solid ones to eliminate screen reflection.]

We designed the following procedure to perform stop-motion capture: a fuzzy toy is placed on a platform in front of an LCD monitor. The toy rotates in small, discrete steps along a predefined 3D trajectory, controlled by two servos connected to a computer. After each step, the digital camera in front of the setup captures the motionless toy against a set of background images. At the end of this process, the toy is removed and the camera captures all of the background images again.

We paid special attention to avoiding reflections of the background screen in the foreground object. These reflections can lead to false transparency that is especially noticeable in nontransparent regions. To reduce the amount of reflection, we used checkerboard background images instead of solid colors, thereby adjusting the mean color of the screen to be the same for each background.
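Capturing the same motionless object over several known backgrounds is what makes the alpha directly recoverable. A minimal sketch of the idea with two backgrounds (the classic triangulation-matting derivation; the actual extraction used for the benchmark is described in [3]):

% Two captures of the static foreground over known backgrounds B_1 and B_2:
I_1 = \alpha F + (1 - \alpha) B_1, \qquad I_2 = \alpha F + (1 - \alpha) B_2
% Subtracting cancels the unknown foreground term \alpha F:
I_1 - I_2 = (1 - \alpha)(B_1 - B_2)
% Least-squares estimate of alpha over the color channels:
\alpha = 1 - \frac{(I_1 - I_2)\cdot(B_1 - B_2)}{\lVert B_1 - B_2 \rVert^{2}}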

Finally, we corrected global lighting changes caused by light-bulb flicker; the resulting alpha mattes have a noise level below 1%. A detailed description of the ground-truth extraction methods is given in [3].
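A hypothetical sketch of such a correction (the function name and the per-frame global-gain model are assumptions for illustration; the actual procedure is described in [3]): estimate a brightness gain for each frame from a region known to be static and rescale the frame accordingly.

import numpy as np

def correct_flicker(frames, reference, static_mask):
    # frames: list of float images in [0, 1]; reference: a flicker-free frame;
    # static_mask: boolean mask of pixels known to show only static background.
    ref_mean = reference[static_mask].mean()
    corrected = []
    for frame in frames:
        # Global gain that matches the frame's static-region brightness to the reference.
        gain = ref_mean / max(frame[static_mask].mean(), 1e-6)
        corrected.append(np.clip(frame * gain, 0.0, 1.0))
    return corrected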

Evaluation Methodology

Our comparison includes both image- and video-matting methods. We apply each matting method to the videos in our data set and then compare the results using metrics of per-pixel accuracy and temporal coherency (see our paper [3] for their formal definitions and a comparison of different metrics).

In these metrics, $N$ denotes the total number of pixels; $\alpha_t^p$ and $\alpha_t^{GT,p}$ denote the transparency values of the evaluated matting and of the ground truth, respectively, at pixel $p$ of frame $t$; and $v_t^p$ denotes the motion vector at pixel $p$, computed on the ground-truth sequences with the optical-flow algorithm [11]. The motion-aware metrics give no unfair advantage to matting methods that use a similar motion-estimation approach, since those methods have no access to the ground-truth sequences.
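For illustration only, here is a simplified sketch of a per-pixel accuracy metric (mean squared error) and a motion-aware temporal-coherency metric; these are plausible simplified forms under the notation above, not the exact definitions from [3]:

import numpy as np

def mse(alpha, alpha_gt):
    # Per-pixel accuracy for one frame: mean squared difference to the ground truth.
    return np.mean((alpha - alpha_gt) ** 2)

def temporal_coherency(alpha_t, alpha_t1, gt_t, gt_t1, flow):
    # Simplified motion-aware coherency: compare the temporal change of the
    # evaluated alpha along ground-truth motion vectors with the temporal
    # change of the ground-truth alpha itself.
    # flow: integer motion vectors (dy, dx) per pixel, computed on the
    # ground-truth sequence (cf. the optical-flow algorithm [11]).
    h, w = alpha_t.shape
    ys, xs = np.mgrid[0:h, 0:w]
    ny = np.clip(ys + flow[..., 0], 0, h - 1)  # follow the motion, clamped to the frame
    nx = np.clip(xs + flow[..., 1], 0, w - 1)
    d_alpha = alpha_t1[ny, nx] - alpha_t
    d_gt = gt_t1[ny, nx] - gt_t
    return np.mean((d_alpha - d_gt) ** 2)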

Public Sequences

For training purposes, we publish here three test sequences with their ground-truth transparency maps. Developers and researchers are welcome to use these sequences; we ask only that you cite our paper [3].

Participate

We invite developers of video-matting methods to use our benchmark. We will evaluate the submitted data and report scores to the developer. In cases where the developer specifically grants permission, we will publish the results on our site. We can also publish anonymous scores for blind-reviewed papers. To participate, simply follow these steps:

  1. Download the data set containing our sequences: City, Flowers, Concert, Rain, Snow, Vitaliy, Artem, Slava, Juneau, and Woods.
  2. Apply your method to each of our test cases.
  3. Upload the alpha and foreground sequences to any file-sharing service. We kindly ask you to maintain these naming and directory-structure conventions. If your method doesn't explicitly produce foreground images, you can skip uploading them; in this case, we will generate them using the method proposed in [7].
  4. Fill in this form to provide information about your method.

Contact us by email with any questions or suggestions at questions@videomatting.com.

Cite Us

To refer to our evaluation or test sequences in your work, please cite our paper [3]:


@inproceedings{Erofeev2015,
	title={Perceptually Motivated Benchmark for Video Matting},
	author={Mikhail Erofeev and Yury Gitman and Dmitriy Vatolin and Alexey Fedorov and Jue Wang},
	year={2015},
	month={September},
	pages={99.1-99.12},
	articleno={99},
	numpages={12},
	booktitle={Proceedings of the British Machine Vision Conference (BMVC)},
	publisher={BMVA Press},
	doi={10.5244/C.29.99},
	isbn={1-901725-53-7},
	url={https://dx.doi.org/10.5244/C.29.99}
}
                

Evaluation

Rating

[Interactive rating table: for each trimap size (a trimap is available for each frame), the table lists every method's publication year, overall rank, and per-sequence scores with per-sequence ranks on the ten test sequences (City, Rain, Concert, Flowers, Snow, Slava, Vitaliy, Artem, Juneau, Woods) under several quality metrics. The compared methods are Bayesian Matting [2], Robust Matting [12], Refine Edge [15], Closed Form [7], Learning Based [13], Nonlocal Matting [5], Shared Matting [4], Comprehensive Sampling [10], KNN Matting [1], Spectral Matting [6], Sparse Sampling [14], Deep Matting [16], Self-Adaptive Matting [17], and Information-Flow Matting [20]. An interactive player with a movable zoom region shows the source, trimap, and each method's result for every sequence. The full tables and player are available on the project website; we recommend using an up-to-date, Chromium-based web browser.]

Integral Plots

Subjective comparison

We carried out a subjective comparison of 13 matting methods using the Subjectify.us platform. We applied the matting methods to videos from our data set and then uploaded the videos containing the extracted foreground objects, along with the ground-truth sequences, to Subjectify.us. The platform hired study participants and showed them these videos in a pairwise fashion. For each pair, participants were asked to choose the video with better visual quality or to indicate that the two are of approximately equal quality. Each participant compared 30 pairs, including 4 hidden quality-control comparisons between the ground truth and a low-quality method; the answers of 23 participants were rejected because they failed at least one quality-control question. In total, 10,556 answers from 406 participants were collected. The Bradley-Terry [18] and Crowd Bradley-Terry [19] models were used to convert the pairwise comparisons into subjective ranks. The study report generated by the platform is shown below.
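As a rough illustration of how pairwise answers become ranks (the study used the platform's implementation of the Bradley-Terry [18] and Crowd Bradley-Terry [19] models; the sketch below is a minimal plain Bradley-Terry fit that ignores ties and crowd noise):

import numpy as np

def bradley_terry(wins, n_iter=200):
    # wins[i, j]: how many times method i was preferred over method j (diagonal zero).
    # Returns one positive score per method; sorting by score gives a subjective ranking.
    # Classic minorization-maximization (Zermelo) updates for the Bradley-Terry model.
    n = wins.shape[0]
    p = np.ones(n)
    for _ in range(n_iter):
        for i in range(n):
            total_wins = wins[i].sum()
            denom = sum((wins[i, j] + wins[j, i]) / (p[i] + p[j])
                        for j in range(n) if j != i)
            p[i] = total_wins / max(denom, 1e-12)
        p /= p.sum()  # fix the overall scale
    return p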

Multidimensional Analysis

References

[1] Qifeng Chen, Dingzeyu Li, and Chi-Keung Tang. KNN matting. Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 35(9):2175–2188, 2013.
[2] Yung-Yu Chuang, Brian Curless, David H. Salesin, and Richard Szeliski. A Bayesian approach to digital matting. In Computer Vision and Pattern Recognition (CVPR), volume 2, pages II-264–II-271, 2001.
[3] Mikhail Erofeev, Yury Gitman, Dmitriy Vatolin, Alexey Fedorov, and Jue Wang. Perceptually motivated benchmark for video matting. In British Machine Vision Conference (BMVC), pages 99.1–99.12, 2015.
[4] Eduardo S. L. Gastal and Manuel M. Oliveira. Shared sampling for real-time alpha matting. Computer Graphics Forum, 29(2):575–584, 2010.
[5] Philip Lee and Ying Wu. Nonlocal matting. In Computer Vision and Pattern Recognition (CVPR), pages 2193–2200, 2011.
[6] A. Levin, A. Rav-Acha, and D. Lischinski. Spectral matting. Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 30(10):1699–1712, 2008.
[7] Anat Levin, Dani Lischinski, and Yair Weiss. A closed-form solution to natural image matting. Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 30(2):228–242, 2008.
[8] Christoph Rhemann, Carsten Rother, Jue Wang, Margrit Gelautz, Pushmeet Kohli, and Pamela Rott. A perceptually motivated online benchmark for image matting. In Computer Vision and Pattern Recognition (CVPR), pages 1826–1833, 2009.
[9] E. Shahrian and D. Rajan. Weighted color and texture sample selection for image matting. In Computer Vision and Pattern Recognition (CVPR), pages 718–725, 2012.
[10] E. Shahrian, D. Rajan, B. Price, and S. Cohen. Improving image matting using comprehensive sampling sets. In Computer Vision and Pattern Recognition (CVPR), pages 636–643, 2013.
[11] Karen Simonyan, Sergey Grishin, Dmitriy Vatolin, and Dmitriy Popov. Fast video super-resolution via classification. In International Conference on Image Processing (ICIP), pages 349–352, 2008.
[12] Jue Wang and Michael F. Cohen. Optimized color sampling for robust matting. In Computer Vision and Pattern Recognition (CVPR), pages 1–8, 2007.
[13] Yuanjie Zheng and C. Kambhamettu. Learning based digital matting. In International Conference on Computer Vision (ICCV), pages 889–896, 2009.
[14] Levent Karacan, Aykut Erdem, and Erkut Erdem. Alpha matting with KL-divergence based sparse sampling. IEEE Transactions on Image Processing, 2017.
[15] Refine Edge tool in Adobe After Effects CC. http://www.adobe.com/en/products/aftereffects.html.
[16] Ning Xu, Brian Price, Scott Cohen, and Thomas Huang. Deep image matting. In Computer Vision and Pattern Recognition (CVPR), 2017.
[17] Guangying Cao, Jianwei Li, Xiaowu Chen, and Zhiqiang He. Patch-based self-adaptive matting for high-resolution image and video. The Visual Computer, pages 1–15, 2017.
[18] Ralph Allan Bradley and Milton E. Terry. Rank analysis of incomplete block designs: I. The method of paired comparisons. Biometrika, 39(3/4):324–345, 1952.
[19] Xi Chen, Paul N. Bennett, Kevyn Collins-Thompson, and Eric Horvitz. Pairwise ranking aggregation in a crowdsourced setting. In Proceedings of the Sixth ACM International Conference on Web Search and Data Mining (WSDM), 2013.
[20] Yagiz Aksoy, Tunc Ozan Aydin, and Marc Pollefeys. Designing effective inter-pixel information flow for natural image matting. In Computer Vision and Pattern Recognition (CVPR), 2017.